The Impact of Driver Compensation on Commercial Motor Vehicle Safety Survey
Contract Management Plan
Scott Fillmon, Perry Jones, and Louis Rabinowitz
Street Legal Industries, Inc.
October 29, 2013
Study Conducted Under Contract to the Federal Motor Carrier Safety Administration
Contract No. DTMC75-12-Q-00022
The primary mission of the Federal Motor Carrier Safety Administration (FMCSA) is to reduce crashes, injuries, and fatalities involving large trucks and buses. Toward that end, FMCSA initiated The Impact of Driver Compensation on Commercial Motor Vehicle Safety Survey. The study is being conducted by Street Legal Industries, Inc. This document provides details about the approach that Street Legal proposes to take to complete the study.
Contents
Develop the Concepts, Methods, and Design of the Survey (Survey Planning)
Initial survey design and data collection instrument
Conduct Survey Questionnaire Pilot Study
Obtain IRB approval to collect study data
Prepare collection databases and online collection questionnaire
Determine survey frame and select sample
Complete carrier SMS data to facilitate carrier contact
Submit report describing the carrier frame and carrier safety records
Develop data processing protocol
Distribute survey introductory letter to carriers chosen as part of the sample
Distribute survey participation email
Data collection and follow-up
Submit report on data analysis of survey results
Title: The Impact of Driver Compensation on Commercial Motor Vehicle Safety Survey
Protocol Number: N/A
Date and Version of the Protocol: January 20, 2013, Version 1
IND/IDE Number: N/A
Name and Contact Information:
U.S. Department of Transportation, Federal Motor Carrier Safety Administration
(Sponsoring Organization)
Theresa Hallquist, Research Analyst (FMCSA Contracting Officer Technical Representative)
1200 New Jersey Avenue, SE
Washington, DC 20590
Phone: (202) 366-1064
Street Legal Industries, Inc. (Investigating Organization)
Scott Fillmon, Project Manager
102 Jefferson Court
Oak Ridge, TN 37830
Phone: (210) 284-2298
Names of all institutions that are involved in the research study:
U.S. Department of Transportation, Federal Motor Carrier Safety Administration
Street Legal Industries, Inc.
According to the US Department of Transportation’s (DOT) Federal Motor Carrier Safety Administration (FMCSA) Analysis Division publication, Large Truck and Bus Crash Facts 2010 (2012), the number of large trucks involved in fatal crashes, injury crashes, and property damage crashes decreased considerably in the ten-year period from 2000 through 2010. Fatal crashes involving large trucks during the period dropped from 4,995 to 3,484; the number of large trucks involved in injury crashes decreased from 101,000 to 58,000; and the number of large trucks involved in property damage only crashes dropped from 351,000 to 214,000. During the same ten-year period, the number of buses involved in fatal crashes decreased from 325 to 249 (p. 1).
The significant decline in serious incidents over the period examined in the DOT document is laudable. However, it is noteworthy that “One or more driver-related factors were recorded for 63 percent of the drivers of large trucks involved in single-vehicle fatal crashes and for 27 percent of drivers of large trucks involved in multiple-vehicle fatal crashes” (p. 63). The DOT further reports that the most often cited immediate cause of fatal crashes involving large trucks was speeding (p. 63). These findings suggest that the number of crashes involving large trucks could drop even further if unsafe driver behaviors were reduced.
The type of research being conducted by Street Legal Industries, Inc. under contract to the Federal Motor Carrier Safety Administration is social science research.
Commercial Motor Vehicle (CMV) driver behavior accounts for a large percentage of crashes. In addition, drivers who are involved in crashes and receive citations for unsafe driving are more likely to be involved in future accidents and receive additional citations. While data showing immediate causes of crashes directly attributable to driver behaviors exist, there have been no studies that examine how certain variables may lead to the unsafe driving behaviors that cause crashes.
The Federal Motor Carrier Safety Administration (FMCSA) has contracted with Street Legal Industries, Inc. (SLIND, aka the research team), to administer the “Impact of Driver Compensation on Commercial Motor Vehicle Survey” (the study). The primary purpose of the study will be to analyze the possible unintended safety consequences of the various methods by which CMV drivers in the sample are compensated. Should the study show a relationship between how drivers are paid and safe driving performance, a potential benefit of the study will be to identify the method of compensation that minimizes crashes and unsafe driving behaviors, leading to fewer fatalities and injuries. No risks are anticipated as a result of the study.
In addition to the primary purpose of the study, a number of other potentially extraneous variables will be assessed. These variables include the following:
Type of commercial motor vehicle operation (long-haul, short-haul, or line-haul) by size of carrier (very small, small, medium, or large)
Whether for-hire, private, or owner operated and whether the carrier can be characterized as a truckload, less-than-truckload, regional, tanker, or other type of carrier
Number of power units
Average length of haul
Primary commodities carried
Number of regular, full-time drivers the carrier employs
Average driving experience, in years, of drivers working for the companies included in the sample
These data will be used to demonstrate possible relationships among the variables as well as to determine whether the variables may contribute to unintended safety consequences. Unintended safety consequences include driver out-of-service rates, vehicle out-of-service rates, and crash rates. For the purposes of this study, “commercial motor vehicle (CMV)” will refer only to trucks and will not include passenger vehicles such as buses.
Behavior Analysis and Safety Improvement Categories (BASICS) – Six categories of unsafe driving practices that are used to “rank entities’ performance relative to their peers” (Green & Blower, 2011; John A. Volpe National Transportation Systems Center):
Unsafe Driving BASIC – Operation of CMV in a dangerous or careless manner. Example violations: speeding, reckless driving, improper lane change, and inattention.
Hours of Service (HOS) Compliance BASIC – Operation of CMVs by drivers who are ill, fatigued, or in non-compliance with the hours-of-service (HOS) regulations. This BASIC includes violations of regulations pertaining to records of duty status (RODS) as they relate to HOS requirements and the management of CMV driver fatigue. This BASIC replaces the Fatigued Driving BASIC as of the publication of Carrier Safety Measurement System (CSMS) Methodology in December 2012.
Driver Fitness BASIC – Operation of CMVs by drivers who are unfit to operate a CMV due to lack of training, experience, or medical qualifications. Example violations: failure to have a valid and appropriate commercial driver’s license and being medically unqualified to operate a CMV.
Controlled Substances and Alcohol BASIC – Operation of CMVs by drivers who are impaired due to alcohol, illegal drugs, and misuse of prescription or over-the-counter medications. Example violations: use or possession of controlled substances or alcohol.
Vehicle Maintenance BASIC – CMV failure due to improper or inadequate maintenance. Example violations: brakes, lights, other mechanical defects, and failure to make required repairs.
Improper Loading/Cargo Securement BASIC – CMV incident resulting from shifting loads, spilled or dropped cargo, and unsafe handling of hazardous materials. Example violations: improper load securement, cargo retention, and hazardous material handling.
Commercial Motor Vehicle (CMV) – According to FMCSA Regulations, Subpart A – General, § 383.5, Definitions (2012), a “motor vehicle or combination of motor vehicles used in commerce to transport passengers or property if the motor vehicle— (1) Has a gross combination weight rating or gross combination weight of 11,794 kilograms or more (26,001 pounds or more), whichever is greater, inclusive of a towed unit(s) with a gross vehicle weight rating or gross vehicle weight of more than 4,536 kilograms (10,000 pounds), whichever is greater; or (2) Has a gross vehicle weight rating or gross vehicle weight of 11,794 or more kilograms (26,001 pounds or more), whichever is greater; or (3) Is designed to transport 16 or more passengers, including the driver; or (4) Is of any size and is used in the transportation of hazardous materials as defined in this section.” For purposes of this study, trucks will be included but buses and other commercial passenger vehicles will not be included.
Crash Indicator – “Histories or patterns of high crash involvement, including frequency and severity… based on information from state-reported crash reports” (John A. Volpe National Transportation Systems Center, Safety Measurement System (SMS) Methodology Draft, 2007).
MCS-150 Form – “The MCS-150 form is technically the application for a USDOT number commonly called a Motor Carrier Identification Report. This is the form a person or entity must use to request a USDOT number before beginning operations and, more importantly, update their MCS-150 information, if they are already in business.” (Morris as cited in North American Transportation Association, 2011). Data provided on MCS-150 forms is maintained in an FMCSA database.
Motor Carrier Management Information System (MCMIS) – According to the Federal Motor Carrier Safety Administration MCMIS Catalog and Documentation webpage, “MCMIS contains information on the safety fitness of commercial motor carriers (truck & bus) and hazardous material (HM) shippers subject to the Federal Motor Carrier Safety Regulations (FMCSR) and the Hazardous Materials Regulations (HMR).”
Pay per Mile System – Compensation system that pays drivers on a per mile basis. Drivers on this system may or may not be paid for loading, unloading, waiting while being loaded, waiting while being unloaded, breakdowns, detention time (waiting to load or unload), or other off-the-road time. Also known as “hub miles,” a reference to a mechanical odometer mounted to an axle that is called a “hubometer.”
Pay per Hour System – Compensation system that pays drivers on a per hour basis. Drivers on this system may or may not be paid for loading, unloading, waiting while being loaded, waiting while being unloaded, breakdowns, detention time (waiting to load or unload), or other off-the-road time.
Pay per Load System – Compensation system that pays drivers a specific amount for each load delivered.
Proportion of Freight Revenue – Compensation system that pays drivers based on a percentage of freight hauled. Typically drivers are not paid for “empty” miles. The driver may receive additional compensation for fuel surcharges, extra pickups or drops, or for “tarping” loads (installing a tarp over cargo).
Safety Consequence – For the purposes of this study, safety consequences include speeding or other traffic law violations or a SAFETYNET reportable crash.
Safety Measurement System – A methodology developed by the John A. Volpe Transportation Systems Center that “measures the safety of motor carriers and commercial motor vehicle (CMV) drivers to identify and monitor safety problems” (Safety Measurement System (SMS) Methodology Draft, 2007). The SMS “ranks entities’ performance relative to their peers in any of six Behavior Analysis and Safety Improvement Categories (BASICS)” and performs as a “crash indicator.” See definition of Behavior Analysis and Safety Improvement Categories (BASICS) above.
SAFETYNET Reportable Crash – A crash that involves a truck used for commercial purposes, with a gross vehicle weight rating (GVWR) or gross combination weight rating greater than 10,000 pounds, or a commercial bus designed to transport more than eight people, including the driver. The crash must result in at least one fatality, at least one injury involving immediate medical attention away from the crash scene, or at least one vehicle disabled as a result of the crash and transported away from the crash scene. [Ref.: US Department of Transportation Large Truck and Bus Crash Facts 2010]
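The weight, passenger, and hazardous materials thresholds in the commercial motor vehicle definition above amount to a four-part rule, which can be sketched as a simple check. This is an illustrative sketch only: the function name and parameters are hypothetical, and it simplifies the regulation’s “whichever is greater” comparisons of ratings versus actual weights by taking ratings alone.

```python
def is_cmv(gvwr_lbs, towed_gvwr_lbs=0, passenger_seats=1, hazmat=False):
    """Rough check of the 49 CFR § 383.5 CMV criteria quoted above.

    Simplified sketch: uses weight ratings only, ignoring the
    regulation's comparison of ratings against actual weights.
    """
    # (1) Combination: 26,001+ lbs total, towed unit over 10,000 lbs
    if towed_gvwr_lbs > 10_000 and (gvwr_lbs + towed_gvwr_lbs) >= 26_001:
        return True
    # (2) Single vehicle rated 26,001 lbs or more
    if gvwr_lbs >= 26_001:
        return True
    # (3) Designed to transport 16 or more passengers, including the driver
    if passenger_seats >= 16:
        return True
    # (4) Any size, used to transport hazardous materials
    return hazmat
```

A study frame filter built on such a rule would additionally exclude buses and other passenger vehicles, per the study scope.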
Significance of the Investigation
The study will investigate whether there is a relationship between the way CMV drivers are compensated and incidences of unsafe driving behaviors or safety consequences. In particular, the research team will determine whether the pay-per-mile method of paying drivers has a stronger relationship with safety issues than other compensation methods do. Other characteristics will be assessed to take into account the influence that possible extraneous variables may have on safety. Public policy at the Federal level, particularly related to the Fair Labor Standards Act and how drivers can be compensated, might be affected should such a relationship be shown to exist.
An ongoing review of related FMCSA studies and related studies, surveys, and reports of Federal and non-Federal sources is being conducted to minimize duplication of efforts, identify best practices of completed projects, identify statistical information and survey questionnaire items that can be repurposed for the current study, and minimize costs to the government and taxpayers. In addition to the review of existing studies, a review of literature in the fields of research design and statistics will inform the design, development, and implementation of the study.
Several resources have informed the research team’s understanding of commercial vehicle crash rates, which has been necessary for framing the current study. These resources reveal the scope of large truck and bus accidents, recent safety trends in the CMV industry, and common immediate causes of CMV accidents. One such resource is the US Department of Transportation, Federal Motor Carrier Safety Administration, Analysis Division publication Large Truck and Bus Crash Facts 2010 (2012), which provides data such as fatal crash statistics, property damage only (PDO) crash statistics, profiles of drivers involved in crashes, and causes of crashes. A study completed for FMCSA by the University of Michigan Transportation Research Institute, Tracking the Use of Onboard Safety Technologies Across the Truck Fleet (2009), has proven to be a particularly valuable resource. The University of Michigan study uses the Motor Carrier Management Information System (MCMIS) as its survey frame, which is the same frame being employed for the current study. The research team for the current study has also benchmarked the University of Michigan study to conclude that it should anticipate a rather low initial response rate to the survey questionnaire.
Another resource that detailed three major types of critical events leading to truck crashes (running out of lane, vehicle loss of control due to traveling too fast for conditions, etc., and colliding with the rear end of another vehicle) is The Large Truck Crash Causation Study Summary (2007).
The Volpe Center’s Safety Management System Methodology Draft (2007) and Green and Blower’s “Evaluation of the CSA 2010 Operational Model Test” (2011) both offer valuable definitions of terms and explanations of FMCSA initiatives that will be used in the study. Particularly useful are definitions of the six Behavior Analysis and Safety Improvement Categories (BASICS) and crash indicators, as well as a description of the Safety Measurement System (SMS). Green and Blower also offer evidence that the SMS is an important, reliable tool for mitigating crashes. Further, they constructed scatterplots that show associations between carriers’ BASICs percentiles and crash rates.
Howarth, Alton, Arnopolskay, Barr, and Di Domenico prepared the FMCSA technical report “Driver Issues: Commercial Motor Vehicle Safety Literature Review” (2007). While the review found several studies detailing a relationship between pay and safety, most focused on amount of pay or other wage incentives rather than method of compensating CMV drivers (for example, an article by Daniel Rodriguez, Felipe Targa, and Michael Belzer (2006) titled “Pay Incentives and Truck Driver Safety: A Case Study”). However, the review yielded two notable studies that do offer some insight into the correlation between how drivers are compensated and accident rates. Both of the articles are by Kristen Monaco and Emily Williams. In “Accidents and Hours-of-Service Violations Among Over-the-Road Drivers” (2001), the authors found that “Drivers paid by the hour are roughly half as likely as those paid by the mile to doze or fall asleep at the wheel. Hours of sleep is also negatively related to the likelihood of falling asleep at the wheel. Sleeping an additional hour makes a driver 0.85 times as likely to fall asleep at the wheel.” In the second article authored by Monaco and Williams, “Assessing the Determinants of Safety in the Trucking Industry,” they reported that “Drivers paid by percentage of revenue reported a higher percentage of accidents, moving violations, and logbook violations (18%, 38%, and 63%, respectively) than those paid by the mile (13%, 27%, and 55%, respectively).” While this is counter to the hypothesis for the current FMCSA study, the authors go on to point out that “This is not surprising because a driver who is paid by the mile typically gets paid the same amount per mile regardless of the revenue generated by the load (exceptions being premiums paid for hazardous materials, etc.). Drivers who are paid a percentage of revenue, primarily owner-operators, tend to drive more miles and run more loads in order to compensate for any empty or low-revenue loads.”
The research team has reviewed a number of reports resulting from studies that have been conducted for the FMCSA in order to learn more about research designs, report formats, and other characteristics that are typical of studies completed for the agency. Notable examples include “The Impact of Driving, Non-Driving Work, and Rest Breaks on Driving Performance in Commercial Motor Vehicle Operation” (2011), “Efficacy of Web-Based Instruction to Provide Training on Federal Motor Carrier Safety Regulations” (2011), “Improving Heavy-Duty Diesel Truck Ergonomics to Reduce Fatigue and Improve Driver Health and Performance” (Fu, Calgano, Davis, Boulet, & Wasserman, 2010), and “Weather and Climate Impacts on Commercial Motor Vehicle Safety (2011).”
Insight about the preponderance of the pay-per-mile approach to compensating CMV drivers is provided in The Transportation Research Board of the National Academies Transportation Research Circular No. E-C146, Trucking 101: An Industry Primer (Burks, Belzer, Kwan, Pratt, and Shackelford, 2010). The authors cite both a University of Michigan Trucking Industry Program Truck Driver Survey that found that 67% of all over-the-road drivers earn mileage-based pay and a driver compensation study by ATA that revealed that 82% of all team drivers and 60% of all solo drivers are paid per mile.
A number of Federal Government resources have been reviewed by the research team. For instance, the research team consulted the Final Information Quality Bulletin for Peer Review (2004) issued by the Director of the Office of Management and Budget (OMB) when drafting the approach for the initial and final peer reviews that are required for the study and appear in this document. Another OMB resource, Standards and Guidelines for Statistical Surveys (2006), has also been reviewed to ensure compliance with requirements detailed in the Standards. Other resources consulted by the research team to ensure compliance with regulations regarding research include the Confidential Information and Statistical Efficiency Act of 2002, Privacy Act of 1974, Code of Federal Regulations Title 45, Public Welfare, Department of Health and Human Services, Part 46, Protection of Human Subjects (2009), and E-Government Act of 2002.
Finally, the resources that have been of particular value in developing the management plan for the project include the General Accounting Office (GAO) Program Evaluation and Methodology Division report Quantitative Data Analysis: An Introduction (1992) and the Statistics Canada publication Survey Methods and Practices (2010).
Null hypothesis (H0): The proportion of unsafe driving behaviors is the same for all methods of driver compensation.
Alternative hypothesis (H1): The proportion of unsafe driving behaviors varies depending on method of driver compensation.
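Because H0 asserts that one violation proportion is shared across all compensation methods, a natural analysis is a chi-square test of homogeneity on a methods-by-outcome contingency table. The sketch below is illustrative only: the counts are hypothetical placeholders, not study data, and the final analysis plan may differ.

```python
# Hypothetical counts of drivers with / without unsafe-driving
# violations by compensation method (illustrative, not study data).
observed = {
    "per_mile":    (45, 155),   # (violations, no violations)
    "per_hour":    (25, 175),
    "per_load":    (30, 170),
    "pct_revenue": (40, 160),
}

total_viol = sum(v for v, _ in observed.values())
total_ok = sum(n for _, n in observed.values())
grand = total_viol + total_ok

# Pearson chi-square statistic: under H0, each method's expected
# counts follow the pooled violation proportion.
chi2 = 0.0
for viol, ok in observed.values():
    row = viol + ok
    exp_viol = row * total_viol / grand
    exp_ok = row * total_ok / grand
    chi2 += (viol - exp_viol) ** 2 / exp_viol + (ok - exp_ok) ** 2 / exp_ok

df = len(observed) - 1        # (4 methods - 1) * (2 outcomes - 1) = 3
critical_95 = 7.815           # chi-square critical value, df = 3, alpha = 0.05
reject_h0 = chi2 > critical_95
# With these illustrative counts, chi2 ≈ 8.66 > 7.815, so H0 would be rejected.
```

In practice a library routine (e.g., a contingency-table chi-square test) would also report an exact p-value; the hand computation above simply makes the mechanics of the test explicit.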
The review of literature in this document reveals that considerable research has been done to identify and characterize immediate causes of crashes involving commercial motor vehicle drivers, including unsafe driver behaviors that have resulted in crashes and reportable safety violations. The research team will survey a stratified random sample of CMV carriers to collect data related to the characteristics (variables) listed in the Purpose section of this document. Once data are collected from the carriers, they will be correlated with safety performance.
The methodology the research team will use to execute the study includes the tasks in Exhibit I, Tasks to be Administered to Complete the Study. These tasks, and the order in which they are administered, are consistent with the FMCSA Request for Quote No. DTMC75-12-Q-00022 (the RFQ) and Office of Management and Budget (OMB) Standards and Guidelines for Statistical Surveys. Each of the tasks and their requisite subtasks will be described in the following sections of this document.
Exhibit I: Tasks to be Administered to Complete the Study
Justification and potential users. FMCSA has initiated this study in order to determine if the pay per mile approach to compensating commercial motor vehicle drivers contributes to safety violations leading to risk of injury or death of CMV drivers and/or other motorists. Specific safety violations that will be examined include speeding and other motor vehicle traffic law violations; reportable crashes that result in fatalities, injuries, or damages to commercial vehicles resulting in their being disabled and having to be removed from the crash scene; violations of hours-of-service regulations; and vehicle out-of-service and/or driver out-of-service status.
Goal and objectives. Consistent with the RFQ authorizing the study, the primary mission of the Federal Motor Carrier Safety Administration (FMCSA) is to reduce crashes, injuries and fatalities involving large trucks and buses. Toward that end, FMCSA commissions studies to identify the causes of crashes and to influence policy to mitigate those causes. The goal of the study described herein is to characterize the industry practices with respect to driver compensation and determine the effect on safety (FMCSA Request for Quote No. DTMC75-12-Q-00022, 2012).
The objectives detailed in the RFQ that support the FMCSA mission include the following:
Objective 1: Complete a survey of trucking companies to determine method of driver compensation and collect other data necessary for executing the study described herein.
Objective 2: Evaluate the impact of variables being studied on CMV safety.
Objective 3: Assess the safety implications of the variables being studied and implications for requirements of the Fair Labor Standards Act.
Decision the study is designed to inform. According to the RFQ, policy decisions may be contingent on the study’s findings:
To explore the relationship between a number of variables and safety, FMCSA will work with other Federal agencies and the Transportation Research Board (TRB) to assess the safety implications of the variables.
Key survey estimates and the precision required of the estimates. The research team will base estimates on a stratified random sample of non-passenger CMV carriers selected from the Motor Carrier Management Information System (MCMIS) database, which includes records for more than 730,000 CMV carriers. Estimates will be made at the 95% confidence level.
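To illustrate how the 95% confidence target interacts with a frame of this size, the sketch below computes a sample size for estimating a proportion and allocates it proportionally across carrier-size strata. The ±3-percentage-point margin of error, the worst-case p = 0.5, and the stratum counts are assumptions for illustration, not figures from the study plan.

```python
import math

# Hypothetical strata drawn from the MCMIS frame (counts illustrative;
# the frame holds roughly 730,000 carriers in total).
frame = {"very_small": 500_000, "small": 150_000,
         "medium": 60_000, "large": 20_000}
N = sum(frame.values())

# Sample size for a proportion at 95% confidence, +/- 3 points,
# worst-case p = 0.5, with the finite population correction.
z, e, p = 1.96, 0.03, 0.5
n0 = z**2 * p * (1 - p) / e**2          # ≈ 1067 before correction
n = math.ceil(n0 / (1 + (n0 - 1) / N))  # correction barely matters at this N

# Proportional allocation across strata.
alloc = {s: round(n * size / N) for s, size in frame.items()}
```

A real design would likely oversample the smaller strata (and inflate n for the low anticipated response rate noted earlier), since proportional allocation leaves few large carriers in the sample.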
Two peer reviews are required for the study: an initial review to gather input from subject matter experts regarding the study’s methodology and a second review of the draft final report. This section of the analysis plan describes the initial peer review; the second peer review is covered in a later section of this document.
Compliance issues. The study being administered by the research team is covered by the Final Information Quality Bulletin for Peer Review (the Bulletin) issued by the Director of the Office of Management and Budget (OMB) on December 16, 2004. The bulletin “includes guidance to federal agencies on what information is subject to peer review, the selection of appropriate peer reviewers, opportunities for public participation, and related issues. The bulletin also defines a peer review planning process that will permit the public and scientific societies to contribute to agency dialogue about which scientific reports merit especially rigorous peer review.”
The Bulletin distinguishes between “influential scientific information” and “highly influential scientific information.” The latter applies more stringent requirements for peer review assessments. According to the bulletin, “the term ‘influential scientific information’ means scientific information the agency reasonably can determine will have or does have a clear and substantial impact on important public policies or private sector decisions.” Further, “the term ‘scientific assessment’ means an evaluation of a body of scientific or technical knowledge, which typically synthesizes multiple factual inputs, data, models, assumptions, and/or applies best professional judgment to bridge uncertainties in the available information. These assessments include, but are not limited to, state-of-science reports; technology assessments; weight-of-evidence analyses; meta-analyses; health, safety, or ecological risk assessments; toxicological characterizations of substances; integrated assessment models; hazard determinations; or exposure assessments.” The Bulletin describes highly influential scientific assessments as those that “could have a potential impact of more than $500 million in any year” or “is novel, controversial, or precedent-setting or has significant interagency interest.” Peer reviews executed for highly influential scientific assessments must meet all of the guidelines prescribed in the Bulletin for influential scientific information as well as more stringent guidelines.
While the guidelines in the Bulletin are both prescriptive and rigorous, they also allow for some leeway for draft documents that are not intended as official disseminations: “In cases where a draft report or other information is released by an agency solely for purposes of peer review, a question may arise as to whether the draft report constitutes an official ‘dissemination’ under information-quality guidelines. Section I instructs agencies to make this clear by presenting the following disclaimer in the report:
THIS INFORMATION IS DISTRIBUTED SOLELY FOR THE PURPOSE OF PRE-DISSEMINATION
PEER REVIEW UNDER APPLICABLE INFORMATION QUALITY GUIDELINES. IT HAS NOT BEEN FORMALLY DISSEMINATED BY [THE AGENCY]. IT DOES NOT REPRESENT AND SHOULD NOT
BE CONSTRUED TO REPRESENT ANY AGENCY DETERMINATION OR POLICY.”
Peer review tasks. A number of valuable resources for conducting peer reviews consistent with the OMB Bulletin have been reviewed by the research team. One document that has been reviewed, Peer Review Process Guide: How to Get the Most out of Your TMIP Peer Review, prepared by the John A. Volpe National Transportation Systems Center, includes a useful table entitled “Tasks in the TMIP Peer Review Process.” While many of the tasks featured in the table are specific to the TMIP (Travel Model Improvement Program) peer reviews, the format of the table and many of the tasks listed provide a template for preparing a table for use in the Impact of Driver Compensation on Commercial Motor Vehicle Safety study. Exhibit II, “Tasks in the Impact of Driver Compensation on Commercial Vehicle Safety Study,” on the following page lists peer review tasks and the parties responsible for completing those tasks.
Exhibit II: Tasks in the Impact of Driver Compensation on Commercial Vehicle Safety Study

Task | SLIND Research Team | FMCSA Staff | Peer Panelists
Identify peer review goals | O | R | O
Plan peer review meeting | R | R | O
Choose panelists | C | R | O
Develop agenda | R | A | C
Invite attendees | C | R | O
Identify specific issues and questions for peer panelists to address | R | A | C
Prepare and distribute background material | R | A | O
Complete meeting logistics | C | A | O
Develop presentations for peer review meeting | R | A | O
Conduct pre-peer review conference call | R | C | C
Host meeting | TBD | TBD | O
Present study details | R | C | O
Take notes | R | O | R
Develop and present recommendations | O | O | R
Document meeting | R | O | O
Write first draft of report; distribute to peers and FMCSA for review | R | O | O
Send report comments to SLIND | O | R | R
Incorporate changes | R | C | C
Submit final report to peers and FMCSA for approval | R | A | A
Analyze recommendations; develop implementation plan | R | A | C
Conduct post-peer review conference call | R | C | C
Evaluate progress | R | C | C

Legend: C = Consult (before decision), A = Approve, R = Responsible, O = No role
Plan peer review meeting. As per the project SOW:
The Contractor’s Research Team will work together with FMCSA to plan the peer review meeting (location, time, duration, and agenda).
It is anticipated that the planning process will be completed via telephone conference calls between the research team and FMCSA.
Choose panelists. FMCSA will select the subject matter experts who will constitute the peer review panel. It is anticipated that the panel will include, but not necessarily be limited to, individuals with expertise in research design, statistical analysis, the Fair Labor Standards Act, and the commercial motor carrier industry.
Develop agenda. The research team and FMCSA will determine whether the panel review meeting can be completed in one day or whether a two-day meeting will be necessary. As detailed in the Volpe Peer Review Process Guide, this decision will be based on “the goals, scope, and panelists’ availability.” Whether the meeting lasts one or two days, the agenda will begin with a presentation by the research team on the study’s goals, proposed methodology, perceived challenges, anticipated approach to statistical analysis, and other such matters.
After the research team’s presentation, peer panelists will meet privately to discuss the proposed approach to the study and to formulate recommendations for improving it. The panelists will then present their recommendations, allowing the research team to query the panel to clarify the recommendations and to open a dialogue about how best to implement them. A preliminary draft agenda appears in Exhibit III, Examples of Peer Review Meeting Agendas for One- and Two-Day Meetings, below.
Exhibit III: Examples of Peer Review Meeting Agendas for One- and Two-Day Meetings

Example of a One-Day Peer Review

9:00 ‒ 9:30    Introductions, review of agenda
9:30 ‒ 10:30   Presentation of research team’s proposed approach to conducting the study
10:30 ‒ 10:45  Break
10:45 ‒ 12:00  Continuation of research team’s presentation
12:00 ‒ 1:00   Break for lunch (Note: suggest catered lunch for one-day meeting)
1:00 ‒ 3:00    Peer reviewers convene to review and discuss research team approach and formulate recommendations for improving study
3:00 ‒ 3:15    Break
3:15 ‒ 5:00    Peer reviewers present recommendations for improving study to research team; research team queries panelists to clarify and refine recommendations; adjourn
Example of a Two-Day Peer Review

Day One

9:00 ‒ 9:30    Introductions, review of agenda
9:30 ‒ 10:30   Presentation of research team’s proposed approach to conducting the study
10:30 ‒ 10:45  Break
10:45 ‒ 12:00  Continuation of research team’s presentation
12:00 ‒ 1:30   Break for lunch
1:30 ‒ 3:30    Peer reviewers convene to review and discuss research team approach and formulate recommendations for improving study
3:30 ‒ 3:45    Break
3:45 ‒ 5:00    Peer reviewers reconvene to complete review and discussion; adjourn for day

Day Two

9:00 ‒ 9:30    Peer reviewers convene to review previous day’s findings
9:30 ‒ 10:30   Peer reviewers present recommendations for improving study to research team
10:30 ‒ 10:45  Break
10:45 ‒ 12:00  Peer reviewers continue to present recommendations for improving study to research team
12:00 ‒ 1:30   Break for lunch
1:30 ‒ 2:45    Research team queries panelists to clarify and refine recommendations; research team and peer reviewers dialogue about approaches for implementing changes to improve study
2:45 ‒ 3:00    Break
3:00 ‒ 4:00    Group concludes implementation discussion; adjourn
Invite attendees. FMCSA will invite peer reviewers and determine an optimal date or dates for conducting the peer review meeting. FMCSA will also determine how best to resolve any potential conflicts of interest that might exist for any of the potential panelists.
Identify specific issues and questions for peer panelists to address. The research team will identify specific issues and questions about the study’s proposed research methodology, statistical analyses to be applied to collected data, commercial motor vehicle carrier issues, and other such matters. These issues and questions will be captured in a draft paper to be submitted to FMCSA for review. FMCSA will make recommendations about modifying the issues and questions presented by the research team and/or will suggest additional issues and questions to include in the peer review.
Prepare and distribute background material. Based on the issues and questions identified by the research team and reviewed and approved by FMCSA, the research team will prepare a background document and ancillary background materials to be distributed to the peer reviewers. Included will be information about the goals and purpose of the study, the project management plan, the research proposal, sampling plan, draft survey questionnaire, and other pertinent background documentation.
Complete meeting logistics planning. The research team and SLIND support staff will work with FMCSA to complete logistics planning for the peer review meeting.
Develop presentations for peer review meeting. The research team will develop presentations for the peer review meeting to include any additional materials to be distributed at the meeting and a PowerPoint presentation that will be used to summarize the study’s goals, proposed methodology, perceived challenges, and anticipated approach to statistical analysis.
Conduct peer review conference call. The research team will coordinate a pre-peer review conference call to include members of the research team, peer reviewers, and FMCSA staff. The call will take place approximately one week prior to the peer review meeting. The conference call will introduce items to be covered at the peer review meeting as well as last-minute logistical items that may require resolution, such as ground transportation, meeting space, and lodging accommodations.
Host meeting. The decision about meeting location will determine whether the host organization will be SLIND or FMCSA. The host organization will be responsible for coordinating meeting location and space, audio-visual equipment requirements, meals and breaks, and other such logistics.
Present study details. The research team will present study details at the outset of the meeting and answer questions so that the peer reviewers are well prepared to meet and formulate recommendations for improving the study.
Record meeting proceedings. Research team members and SLIND support staff will be responsible for taking notes and documenting the proceedings of those parts of the peer review in which both research team members and peer reviewers take part. One of the peer review members will be designated to document the meeting in which peer reviewers meet as a group to formulate recommendations. Certain portions of meetings may be recorded, should peer reviewers allow.
The research team will finalize the survey design based on the input of subject matter experts in the initial peer review. The final design will be submitted to FMCSA for its review and the review of other subject matter experts or stakeholders that FMCSA deems appropriate. Once the final analysis plan is reviewed and approved, the research team will complete development of the data collection instrument. The data collection instrument in the case of this study will be a survey questionnaire. A comprehensive discussion of the concepts guiding the design and deployment of the survey instrument is covered in the Collect data subsection that appears later in this section of the analysis plan.
A limited pilot study of trucking companies will be done to test the effectiveness of the study approach and the validity of the survey questions detailed in this document. No payments or gifts will be offered to pilot study participants.
The research team will extract from MCMIS a list of non-passenger motor carriers located within a 200-mile radius of Oak Ridge, Tennessee. The list will be pared down to include only those with current MCS-150 information. The list will be grouped by peer group size and sorted by number of drivers. The research team will contact carriers by phone and email until at least one carrier in each peer group agrees to participate in the pilot study. The study will include fewer than nine (9) carriers. Exhibit IV, FMCSA CMV Peer Group Categories, shows how carriers are categorized by FMCSA.
Exhibit IV: FMCSA CMV Peer Group Categories

Peer Group | No. of Drivers
Very Small | 1-5
Small | 6-50
Medium | 51-500
Large | >500
Interviews will be conducted in person at the carriers’ locations, if possible. In cases where in-person interviews cannot be arranged, they will be conducted by telephone.
Companies’ Safety Directors will be asked to participate in interviews. They will be asked to provide feedback about each individual survey item. It is anticipated that interviews will take no longer than one hour to complete. The amount of time necessary to complete the survey will be refined based on the interviewers’ pilot experience and feedback from respondents.
Note: The research team conducted pilot interviews during the month of September 2013. A report of this activity is presented in Attachment 1, Pilot Study for The Impact of Driver Compensation on Commercial Motor Vehicle Safety Survey. The research team has revised the survey questionnaire as a result of the pilot study.
Once the data collection package has been approved by FMCSA, the research team will submit the materials to a private institutional review board for review and approval. This is an important and necessary step that is covered by regulations promulgated by the US Department of Health and Human Services (45 CFR 46). An important part of the definition of research in the regulations that applies to this study states that
Research means a systematic investigation, including research development, testing and evaluation, designed to develop or contribute to generalizable knowledge. Activities which meet this definition constitute research for purposes of this policy, whether or not they are conducted or supported under a program which is considered research for other purposes.
Further, the regulations provide criteria for when the approval of an institutional review board (IRB) is necessary. These criteria are applicable for research involving human subjects, and
Human subject means a living individual . . . about whom an investigator conducting research
Obtains data through intervention or interaction with the individual, or
Uses records gathered on human subjects.
Clearly, this study is covered by both the definition of research as well as the criteria that appear in the regulations.
As per the requirements spelled out in regulations promulgated in the Paperwork Reduction Act (PRA) of 1995, the research team will work with FMCSA to prepare and submit an OMB application package once IRB approval has been obtained. The package will seek required authorization for an information collection request (ICR) to administer the survey. The US Department of Agriculture (USDA) website provides a helpful listing of the required contents of an OMB ICR package:
Completed OMB 83-I form
Supporting Statement and Burden Grid
Copy of any forms, surveys, scripts, screens, etc. used in the collection of information
Copy of the 60-day Federal Register notice
Copies of any pertinent statutes or regulations, which reference collection requirements or provide guidance on what or how information should be collected
Copies of any pertinent handbooks, manuals or other program instructional material
Copies of reports
30-day Federal Register Notice
Exhibit V, Items Required for OMB Approval, lists the requirements for completing the OMB package and summarizes the support the research team will provide in working with FMCSA to complete and submit the package.
Exhibit V: Items Required for OMB Approval

Item Description | Description of Research Team Support (OMB 83-I Form item number appears in parentheses)
Completed OMB 83-I Form |
Supporting Statement and Burden Grid |
Copies of any forms, surveys, scripts, screens, etc. used in the collection of information |
Copy of the 60-day Federal Register notice |
Copies of any pertinent statutes or regulations which reference collection requirements or provide guidance on what or how information should be collected |
Copies of any pertinent handbooks, manuals, or other program instructional materials |
Copies of reports |
30-day Federal Register Notice |
Once the ICR submission package has been completed and submitted to OMB for approval, the research team will monitor the OMB approval process and respond to any additional requests for information.
The research team will use the Motor Carrier Management Information System (MCMIS) to create the survey frame. An initial task will be to review data in the system to determine any omissions, erroneous inclusions, duplications or misclassifications of units in MCMIS records, and other problems that may lead to coverage errors in the administration of the survey. This review of data and the design of the survey frame will take place at the same time that the OMB review of the ICR package is being completed.
The population for this study is defined as non-passenger commercial motor vehicle carrier companies operating legally in the United States. The survey frame will consist of non-passenger CMV carriers in the MCMIS characterized by FMCSA as having an “active” status. The research team will also design and test the online survey questionnaire. Functionality of the online questionnaire will allow survey respondents to input partial data, save their online form, and return at a later time to complete the questionnaire. Data from online questionnaires will be input directly into the research team’s database for analysis.
The research team will use surveying to collect data. The procedure to be followed is shown in the flowchart in Exhibit VI, “Flowchart of Data Collection Procedure,” below. Carriers will be asked their type of operation (such as long-haul, short-haul, and line-haul) and how their drivers are compensated. Methods of compensation will be placed in one of the following eight categories:
Pay by the mile
Pay by the hour
Salaried
Pay by percentage of load
Pay by revenue
Pay by delivery or stop
Use of more than one pay method
Other (to include miscellaneous methods that clearly do not fall in one of the other categories)
The research team will assure confidentiality of participant responses. No individual participant will be identified in the survey report or other documentation provided to FMCSA. Data collected will be coded and reported in the aggregate and no individual CMV carrier company’s data will be identifiable.
Exhibit VI: Flowchart of Data Collection Procedure
Those carriers surveyed reporting they compensate drivers using more than one method will be asked to provide additional data, some of which will be about specific drivers. The research team will use collected data to determine whether a correlation of method(s) of driver compensation and safety exists.
The population for this study is defined as non-passenger commercial motor vehicle carrier companies operating in the US. The survey frame consists of non-passenger CMV carriers listed in the Motor Carrier Management Information System (MCMIS) characterized by FMCSA as having an “active” status. MCMIS is a database of CMV carriers, the majority of which are US companies that are non-passenger carriers, although carriers operating in the US from foreign countries such as Canada and Mexico that have been issued a DOT number are also included in the database and will be part of the survey frame. CMV carrier companies will provide information about drivers in the aggregate (e.g., number of drivers who have been involved in a crash or who have been cited for unsafe driving). The minimum age of drivers covered in the study will be 18. No maximum age is anticipated. Drivers of all racial and ethnic origins will be covered by the research. The study is gender neutral. No vulnerable populations are anticipated to be included in the study.
The research team is using a confidence level of 95% and a 5% margin of error for the study. The 5% margin of error is consistent with the Government Accountability Office recommendation. The survey frame will be used to obtain a stratified sample for the study. Strata will be based on peer group categorized by number of power units.
The research team conducted an a priori power analysis with the aid of the G*Power software application, which is “a free general power analysis program” (Mayr et al., 2007). To be consistent with a standard advanced by Cohen (1992), the research team used an effect size (ES) of 0.1. The error probability (α) the team used is .05. While Cohen and others recommend using a power level (1 − β error probability) of .80, the research team opted to use a power level of 0.95 to lessen the possibility of a Type II error. Because the study employs eight (8) categories of driver compensation, seven (7) degrees of freedom (df) were entered into the G*Power program to calculate the sample size that will be used for the project.
Exhibit VII, “G*Power Program Screenshot,” below, is a screenshot of the G*Power program form with the effect size, error probability, power level, and degrees of freedom figures entered in the bottom half of the form and the resulting calculation of sample size in the top half of the form. As shown in the bottom right quadrant of the form, the resulting calculated total sample size is 2,184.
Exhibit VII: G*Power Program Screenshot
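The same calculation can be reproduced without G*Power. The following sketch (an illustration assuming Python with SciPy, which is not part of the study toolchain) solves for the sample size at which a chi-square goodness-of-fit test with the stated effect size, α, power, and degrees of freedom reaches the desired power:

```python
from scipy.stats import chi2, ncx2
from scipy.optimize import brentq

def chisq_gof_sample_size(w=0.10, alpha=0.05, power=0.95, df=7):
    """Sample size for a chi-square goodness-of-fit test.

    The noncentrality parameter is lambda = n * w**2, so we search for
    the n at which the tail probability of the noncentral chi-square
    beyond the central critical value equals the desired power.
    """
    crit = chi2.ppf(1 - alpha, df)  # critical value under H0
    shortfall = lambda n: ncx2.sf(crit, df, n * w ** 2) - power
    return brentq(shortfall, 10, 1_000_000)

# prints a value near the 2,184 reported by G*Power above
print(round(chisq_gof_sample_size()))
```

The search brackets (10 to 1,000,000) are arbitrary; any interval that contains the root works, since power increases monotonically with sample size.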
The research team will seek to collect responses from 2,184 units in order to mitigate the influence the sensitive nature of the study may have on response rate and to boost the potential for obtaining the minimum number of completed survey questionnaires. The team has set a response rate goal of 20%. Therefore, the team will draw an initial random sample of 10,920 carriers from the survey frame stratified by peer group, as illustrated below in Exhibit VIII. Peer groups are determined by the number of power units the CMV carriers operate and are consistent with FMCSA carrier size categories. There are four categories of peer groups: (1) very small (1–5 power units); (2) small (6–50 power units); (3) medium (51–500 power units); and (4) large (more than 500 power units). Sample stratification will reflect appropriate proportions of units from each peer group. Should the initial random sample not yield the minimum number of units required, or should appropriate proportions of peer groups not be achieved, a second and, if necessary, third random sample will be drawn. If appropriate proportions of units are still not achieved after three samples have been drawn, the research team will weight sample results for nonresponse. Sampling and weighting calculations are described in the Determining sample size and sample size allocation section of this document.
Exhibit VIII: Survey Frame

Stratum (Peer Group Categorized by No. of Power Units [PU]) | No. of Carriers in Survey Frame | No. of Drivers¹ | Proportion of Drivers (ah)² | Calculated Sample Size³ | Actual Sample Size (5X the Calculated Sample Size)⁴
Very small (1–5 PU) | 517,160 | 941,621 | .254 | 555 | 3,527
Small (6–50 PU) | 187,127 | 1,034,947 | .279 | 609 | 3,871
Med. (51–500 PU) | 24,333 | 358,872 | .097 | 212 | 1,347
Large (>500 PU) | 2,175 | 1,369,164 | .370 | 808 | 2,175
TOTAL | 730,795 | 3,704,604 | 1 | 2,184 | 10,920
¹ Obtained from MCS-150 report data.
² Proportions are based on the number of drivers in each peer group in relation to the total number of drivers.
³ Calculated peer group sample sizes are based on proportions of the total sample of 2,184.
⁴ A factor of 5X the calculated sample size exceeds the total available large peer group size of 2,175; therefore, all of the large carriers in that stratum will be included in the sample. Actual sample sizes of the other peer groups are adjusted proportionately to equal the total of 10,920 carriers that will be sampled.
The total of 10,920 carriers to be sampled is 5 times the total calculated sample size for the four peer groups. Proportionately reapportioning the difference between the calculated large peer group oversample and the actual number of carrier companies in that cohort (a difference of 1,865 carriers) to the other three peer groups results in those three smaller peer groups’ actual sample sizes being greater than 6 times their calculated sample sizes. In comparison, the actual sample size of the large peer group is less than 3 times its calculated size. However, as the following discussion makes clear, the research team believes oversampling the three smaller groups using a factor more than twice that of the large group is justified.
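The allocation arithmetic described above can be sketched as follows. This is an illustration using the figures from Exhibit VIII; the rounded per-stratum values may differ from the table by a unit or two depending on the rounding convention used.

```python
# Proportional stratified allocation with a capped stratum.
# Strata map to (carriers in frame, drivers reported), per Exhibit VIII.
frame = {
    "very_small": (517_160, 941_621),
    "small":      (187_127, 1_034_947),
    "medium":     (24_333,  358_872),
    "large":      (2_175,   1_369_164),
}
total_sample = 2_184            # calculated minimum number of responses
factor = 5                      # oversampling factor for a ~20% response rate
target = total_sample * factor  # 10,920 carriers to contact

# Allocate the calculated sample proportionally to each stratum's
# share of the total driver count.
total_drivers = sum(d for _, d in frame.values())
calc = {k: round(total_sample * d / total_drivers) for k, (_, d) in frame.items()}

# Cap any stratum whose oversample exceeds its frame size (the large
# group), then reapportion the shortfall proportionally across the rest.
capped = {k for k, (n_carriers, _) in frame.items()
          if calc[k] * factor > n_carriers}
actual = {k: frame[k][0] for k in capped}
remaining = target - sum(actual.values())
uncapped_total = sum(calc[k] for k in frame if k not in capped)
for k in frame:
    if k not in capped:
        actual[k] = round(remaining * calc[k] / uncapped_total)

print(actual)  # large stratum capped at 2,175; contacts total 10,920
```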
Anecdotal information obtained by the research team during its pilot of the survey questionnaire indicates that many commercial motor carriers, particularly very small and small carriers, will likely be reluctant to participate. At the conclusion of piloting events, the research team asked for feedback from participants. Participants from three companies independently volunteered the view that the larger trucking companies would likely participate in the study because they would want to be viewed as cooperative by FMCSA. However, those same participants, some of whom were once over-the-road drivers for very small or small companies, made it clear that many in the industry take the view that FMCSA acts to burden commercial carriers with regulations and is not to be trusted. Therefore, they pointed out, many in the industry would opt out of participating in the survey.
The distrust of FMCSA research and regulations is apparent in industry literature. For example, in an article appearing on the Truckinginfo.com website entitled “What Happens when the Facts Show FMCSA Goofed?” the author provides a strong indictment of the approach FMCSA took to implement hours-of-service regulations that took effect on July 31, 2013. The author goes so far as to say that FMCSA’s HOS regulation will cause experienced drivers to leave the trucking workforce because of burdensome regulations and be replaced with newer, less experienced drivers who will have more crashes.
Similarly, the American Transportation Research Institute’s recent report, Critical Issues in the Trucking Industry – 2013, ranks HOS as the number one issue of concern. The preface to the report includes the following statement: “Additionally, the industry is still sorting through challenges and conflicts with the Federal Motor Carrier Safety Administration’s (FMCSA) Compliance, Safety, Accountability (CSA) initiative, which is now in its third year of national implementation.” That the industry views the FMCSA in a somewhat adversarial way indicates reluctance to participate in voluntary activities such as data collection for the current and past studies.
The research team’s literature search and review yielded very little in the way of research that has already been conducted on the possible relationship of crashes, unsafe driving behaviors, and method of driver compensation. However, the team was able to benchmark its study approach to one undertaken by the University of Michigan Transportation Research Institute for FMCSA that examined the use of onboard safety technologies. Like the study described herein, the University of Michigan study, “Tracking the Use of Onboard Safety Technologies Across the Truck Fleet” (2009), used the MCMIS database to define the frame and used a random stratified sample approach. As the University of Michigan team reported:
The original estimated response rate to the survey was expected to be approximately 30 percent, but as the survey progressed it became clear that the first sample would not generate the target number of cases. Accordingly, additional samples were drawn . . .
While it is the research team’s intent to use a similar multiple-sampling approach to maximize participation, the University of Michigan team’s experience underscores the opinion of those in the current study’s piloting activities: the smaller the company, the lower the response rate (the University of Michigan study used six levels of carriers characterized by number of power units; as described elsewhere in this document, the current study uses four levels). Exhibit IX, Survey Statistics, displays a portion of a table that appears in the University of Michigan study.
Exhibit IX: Survey Statistics

Strata | Refusals | Nonresponse | Number of Companies in Sample | Response Rate
Strata 1: 1-3 Trucks | 168 | 1,184 | 1,500 | 10%
Strata 2: 4-20 Trucks | 169 | 1,198 | 1,500 | 9%
Strata 3: 21-55 Trucks | 230 | 1,061 | 1,500 | 14%
Strata 4: 56-100 Trucks | 150 | 982 | 1,334 | 15%
Strata 5: 101-999 Trucks | 119 | 987 | 1,333 | 17%
Strata 6: 1000+ Trucks | 32 | 216 | 333 | 26%
Total | 868 | 5,628 | 7,500 | 13%
The research team has already conducted a comprehensive literature search and review that yielded the University of Michigan study cited in the previous section. That study serves as an external benchmark that provides some validation of the approach the research team for the current study is taking to maximize response rate and minimize sample bias (tantamount to comparing survey results to external data).
The research team also has access to the FMCSA Safety Measurement System (SMS) database. This resource includes crash data and data from reports of different types of driver infractions provided by various law enforcement agencies throughout the United States. While the SMS includes only a fraction of the total active carriers (approximately 201,000 of the 731,000 active carriers), the team can compare relative rates of infractions by drivers in each of the FMCSA CMV Peer Group Categories described in a previous section of this document (very small, small, medium, and large). By comparing the percentages of infractions by the drivers in the various CMV Peer Group Categories (which constitute the study’s sample strata) to the results of survey responses and nonresponses, the research team can mitigate possible nonresponse bias by weighting responses for any categories that are under-represented.
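One way to sketch the weighting step: each stratum's responses are weighted by the ratio of its share of the survey frame to its share of completed responses, so under-represented strata receive weights greater than 1. The respondent counts below are hypothetical placeholders, not study data; the frame counts are taken from Exhibit VIII.

```python
# Post-stratification weights to mitigate nonresponse bias.
frame_counts = {"very_small": 517_160, "small": 187_127,
                "medium": 24_333, "large": 2_175}
respondents = {"very_small": 380, "small": 520,
               "medium": 280, "large": 600}  # hypothetical counts

n_frame = sum(frame_counts.values())
n_resp = sum(respondents.values())

# weight_h = (frame share of stratum h) / (respondent share of stratum h)
weights = {h: (frame_counts[h] / n_frame) / (respondents[h] / n_resp)
           for h in frame_counts}
```

By construction, applying these weights makes the weighted respondent totals match the strata proportions of the frame, which is the bias correction the text describes.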
The research team will conduct research to gather email addresses and other contact information for individuals who will be asked to respond to survey questionnaire requests. This information will be added to the carrier data to facilitate contact with potential survey participants.
The research team will submit a document to FMCSA summarizing the results of the review of the survey frame and carrier safety records. The report will clearly highlight the data that are suspected of being inaccurate or otherwise faulty. FMCSA will review the report, and the research team will proceed after receiving FMCSA feedback.
The research team will develop a protocol for the secure handling of data collected in the study. Exhibit X, Data Processing Protocol, lists the issues related to data processing along with summary statements of the suggested approach the research team will employ to efficiently and effectively deal with those issues. The data processing protocol will be submitted to FMCSA for review and approval once it is fully developed.
Exhibit X: Data Processing Protocol

Issue: Format of data
Approach: The format of data will be considered as the survey questionnaire is being designed and developed in order to minimize issues after data has been collected.

Issue: De-identification of data for main study database
Approach: Data will be coded to ensure confidentiality.

Issue: Data capture
Approach: Data capture is the process of converting information provided by a respondent into an electronic format. Data will be self-enumerated by respondents using the questionnaire. Team members responsible for entering data into electronic forms will be trained to do so.

Issue: Securely transmitting the data between research partners
Approach: Data will be transmitted between research partners using reliable and secure cloud computing technology. The decision about which cloud computing service the research team will use will be determined by an evaluation of providers on criteria such as the vendor’s ability to encrypt data, the vendor’s ability to password protect files that team members share, and the ability to set up folder permissions (authorizations) for team members to share data at various levels.

Issue: Storage of the data
Approach: Data will be stored in databases on a secure server. Backups of collected data will be completed on a daily basis and stored at a remote site using secure cloud computing technology.

Issue: Quality assurance
Approach: Senior members of the research team will review completed questionnaires to ensure that data has been entered correctly, edits have been applied appropriately, and missing information is collected in a timely fashion.

Issue: Document control
Approach: A robust document control procedure will be implemented to account for each in-process questionnaire. The document control procedure will also be used to track changes made to any data that has been collected.
A copy of a letter issued by FMCSA describing the purpose of the study to carriers in the sample will be emailed prior to the research team’s initial survey deployment. The letter will inform carriers that they will be contacted and asked to provide information about their type of operation and the method(s) they use to pay drivers. Carriers will also be told how the data they provide will be used and the purpose of the study. They will be told when they can expect to be contacted and that they may be asked to provide information about individual drivers working for their company. Finally, the letter will include a web address that will enable potential study participants to access a page that will provide more comprehensive information about the study, including information that underscores the legitimacy of the project and information about how to contact the FMCSA study project officer for further information.
An email message announcing the web-based survey, providing instructions for completing the survey questionnaire, and setting the deadline for completing the questionnaire will be distributed to potential participants. No payments or gifts will be offered to potential survey participants. A flowchart illustrating the procedure for completing the web-based questionnaire is shown in Exhibit XI, Procedure for Accessing and Completing Web-based Survey Questionnaire. Data collected will yield estimates of various types of carrier operations such as long-haul, short-haul, and line-haul, types of commodities hauled, number of power units in their fleets, and other information. The survey will also determine the percentages of carriers using various methods to compensate drivers such as pay-per-mile, pay-per-load, and hourly rate. The research team will monitor survey responses and send follow-up email reminders to maximize survey response.
The research team will design and develop training to be delivered to surveyors. Topics covered by the training will include—but not be limited to—surveying best practices, tips for dealing with reluctant subjects, safeguarding personally identifiable information, and how to enter data into computer forms. The research team will also produce a survey script that will ensure that surveyors use a consistent data collection approach and capture all required data.
It is anticipated that a portion of carriers participating in the survey will report that they use only one method of paying drivers. Representatives of carriers that use a variety of ways to pay drivers are likely to need time to do some research about specific drivers. Those participants in the initial survey will have the option of receiving a follow-up call from the surveyor or of filling out the online form to provide driver-level data. Representatives of carriers using a variety of approaches will be asked to provide information about how specific, randomly selected drivers with a record of a safety violation or violations are compensated.
Regardless of the approach that participants choose to provide driver-level data, the burden will be minimal. It is anticipated that the initial survey items will take considerably less than thirty minutes to complete. If the participant can provide driver-level data without having to research the data and opts to provide that data during the initial call, it should take no longer than five minutes to provide that additional information.
The burden to conduct research to provide driver-level data should not be great for most carriers that have sufficient records. In most cases, participants will be able to complete that research in less than one hour. Follow-up phone interviews will take no longer than five minutes, and likewise, those who enter data via the online form should be able to do so within five minutes.
Exhibit XII: Auto–Response Logic for Online Questionnaire
As survey questionnaires are completed by participants, data will automatically be input into the survey database. The system will be designed to measure completeness of entry. Incomplete surveys will automatically generate an email reminder that will be distributed to the participant. Two additional reminders will be sent at three-day intervals to prompt the participant to complete the questionnaire. The emails will contain a link returning the participant to the incomplete portion of the questionnaire. If the participant does not return to complete the questionnaire, they will be contacted directly by phone to request completion of the questionnaire and offered an opportunity to complete the remaining questions over the phone at that time. If after these efforts the participant does not complete the questionnaire, then no further contact will be made. The questionnaire will be closed and an email thanking the participant for their effort will be sent. This auto-response logic is illustrated in Exhibit XII, Auto-Response Logic for Online Questionnaire. A rudimentary analysis of completed questionnaires will be done by the research team to determine completeness of responses and to identify any potentially erroneous responses. In such cases, the research team will contact specific participants to validate and edit data, as appropriate, to maximize the integrity of collected data.
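A minimal sketch of this auto-response logic follows; the names, data structure, and `send_email` callback are illustrative, not part of the actual survey system, and the phone-contact step is compressed into a comment.

```python
from dataclasses import dataclass

REMINDER_LIMIT = 3  # the initial email reminder plus two follow-ups

@dataclass
class Response:
    email: str
    complete: bool = False
    reminders_sent: int = 0
    closed: bool = False

def process(resp: Response, send_email) -> None:
    """Advance one step of the follow-up workflow for one response.

    Intended to run once per reminder interval (every three days).
    """
    if resp.complete or resp.closed:
        return
    if resp.reminders_sent < REMINDER_LIMIT:
        # reminder email links back to the incomplete section
        send_email(resp.email, "reminder")
        resp.reminders_sent += 1
    else:
        # after exhausting email reminders (and a direct phone contact),
        # close the questionnaire and thank the participant
        send_email(resp.email, "thank_you")
        resp.closed = True
```

Running `process` repeatedly for an incomplete response sends three reminders, then a thank-you message, after which the record is closed and no further contact is made.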
Some carriers in the sample may not have had access to the information necessary to respond to items in the second portion of the survey interview. Therefore, follow-up phone calls will be made. An analysis of all units in the sample will then be conducted. The criteria used to characterize “safety” in this analysis include driver out-of-service rates, motor vehicle out-of-service rates, unsafe driving violations (such as speeding or illegal lane changes), and crash rates. A calculation will then be conducted to determine whether a correlation between safety and pay method exists.
The research team will compare data for how specific drivers with violations are compensated to the way carriers compensate drivers in order to determine if a relationship between compensation method and safety exists.
Approach to maximize response rate. The study team recognizes that it will be collecting information that some carriers in the sample consider sensitive. Therefore, a number of techniques will be used to maximize response rate:
Participant contact information will be verified and completed with formal name, title, phone number, and email address prior to contact to ensure accuracy and maintain formality.
FMCSA will send an introductory letter to participants describing the purpose of the study and encouraging carriers to participate.
The questionnaire will be designed to ensure ease of entry and clarity of questions. Terms will be described in context and defined to ensure understanding.
The questionnaire will be administered online. This will minimize the burden to the participant and allow them to complete the questionnaire during times of convenience.
Participants will be contacted by auto-generated emails and reminders prompting them to complete the survey in a timely manner.
For phone contacts, scripts will be used to ensure that interviews are compact and take as little time as possible to conduct. Surveyors will be trained and provided glossaries of terms that can be used should a participant require terms be explained.
Stratified sample populations will be monitored during collection and should results return an unacceptable response rate, the research team will select a second random sample of carriers within the affected population to solicit additional responses.
Respondent privacy and confidentiality of information will be ensured through application of Federal regulations including, but not limited to, the Confidential Information Protection and Statistical Efficiency Act of 2002, the Privacy Act of 1974, and the E-Government Act of 2002. The privacy and confidentiality policy will be reinforced for potential respondents in the written transmittal announcing the survey, which will be signed by an appropriate FMCSA official, and as an opening statement in any other written or oral communication made by members of the research team to respondents.
Validity and reliability of data collected. There are two types of non-sampling errors that can compromise the quality of the study: random errors and systematic errors. Poor survey questionnaire design and deployment can lead to significant non-sampling error, particularly of the systematic variety. Good discussions of errors that result from poor surveying can be found in the Statistics Canada publication Survey Methods and Practices and in the GAO’s Program Evaluation and Methodology Division publication Developing and Using Questionnaires. The Statistics Canada publication lists four sources of non-sampling errors:
Coverage errors consist of omissions, erroneous inclusions, duplications, and misclassifications of units in the survey frame . . . that affect every estimate produced by the survey.
Measurement error is the difference between the recorded response to a question and the ‘true’ value. One of the main causes of measurement error is misunderstanding on the part of the respondent or interviewer . . . (and) may result from:
the use of technical jargon;
the lack of clarity of the concepts (i.e., use of non-standard concepts);
poorly worded questions;
inadequate interviewer training;
false information given (i.e., recall error, or lack of ready sources of information);
a language barrier;
poor translation (when several languages are used).
Nonresponse error can take two forms: item (or partial) nonresponse and total nonresponse. Item nonresponse occurs when information is provided for only some items, such as when the respondent responds to only part of the questionnaire. Total nonresponse occurs when all or almost all data for a sampling unit are missing.
Processing error can occur at several points during the transformation of data obtained during collection into a form that is suitable for tabulation and analysis. Because processing is time-consuming and resource intensive, it is potentially a source of errors. Another type of processing error can occur when coding is applied to open questions that require interpretation and judgment. Data capture processing errors result when data are not entered into the computer exactly as they appear on the questionnaire. Editing and imputation processing errors can occur when faulty replacement values are assigned to resolve problems of missing, invalid, or inconsistent data.
The GAO document includes a discussion of non-sampling errors that are introduced in a study as a result of inappropriate questions on a survey. Questions are inappropriate if they
are not relevant to the evaluation goals;
are perceived as an effort to obtain biased or one-sided results;
cannot or will not be answered accurately;
are not geared to the respondent’s depth and range of information, knowledge, and perceptions;
are not perceived by respondents as logical and necessary;
require an unreasonable effort to answer;
are threatening or embarrassing;
are vague or ambiguous; or
are unfair.
Exhibit XIII, Mitigating Risks of Non-sampling Errors in Surveys, lists the errors discussed in this section along with a statement summarizing the strategies that the research team will use to mitigate those risks. Similarly, Exhibit XIV, Mitigating Risks Created by Inappropriate Survey Items, lists inappropriate questions discussed in this section along with a statement summarizing strategies that the research team will use to mitigate those risks.
Exhibit XIII: Mitigating Risks of Non-sampling Errors in Surveys

| Risk | Strategies for Mitigation |
| --- | --- |
| Coverage error | |
| Measurement error | |
| Nonresponse error | |
| Processing error | |
Exhibit XIV: Mitigating Risks Created by Inappropriate Survey Items

| Risk | Strategies for Mitigation |
| --- | --- |
| Survey item not relevant to evaluation goals | |
| Unbalanced line of inquiry | |
| Items that cannot or will not be answered accurately | |
| Items that are not geared to respondent’s depth and range of information, knowledge, and perceptions | |
| Items that respondents perceive as illogical or unnecessary | |
| Items that require unreasonable effort to answer | |
| Threatening or embarrassing items | |
| Vague or ambiguous items | |
| Improper qualification | |
| Abstract concepts | |
| Unfair items | |
Determining response rate goal. As mentioned earlier in this document, the results of this study may influence changes to Fair Labor Standards Act exemption standards for CMV drivers. Such a change may have a significant financial impact on CMV companies. Therefore, due to the sensitive nature of the data that will be elicited, some companies invited to participate in the study may decline. However, because of the importance of the study, it is critical that an appropriate response rate be achieved. The anticipated participation rate among CMV carriers in the sample is 20%.
Variance estimators and inference methods. The four peer groups (very small, small, medium, and large) will likely be very different with respect to data collected from the survey. Therefore, a stratified random sample design is being used. The research team believes this approach to sampling and estimators employed will lead to greater statistical efficiency.
The first step in estimation will be to assign a weight to each sampled unit. To begin this process, inclusion probabilities will be calculated for each stratum as follows:
Stratum 1, Very Small CMV Carriers: π1 = n1/N1
Stratum 2, Small CMV Carriers: π2 = n2/N2
Stratum 3, Medium CMV Carriers: π3 = n3/N3
Stratum 4, Large CMV Carriers: π4 = n4/N4

Where πx is the probability of selection, nx is the selected sample size, and Nx is the population size for stratum x.
The design weight, wd, for each sampled unit will be the inverse of its inclusion probability, π. The standard formula for determining design weight will be used:

wd = 1/π
In cases in which all or almost all data for a sampled unit are missing (after repeated attempts to secure data from non-responding units), weight adjustments for nonresponse will be calculated. The approach to calculating nonresponse weights is covered in the subsection entitled Weighting for nonresponse.
Determining sample size, probability weights, and sample size allocation. Determining an optimal sample size is a function of weighing the precision requirements of estimates against operational constraints such as resources and time. Given the impact the results of the study may have on how CMV drivers are treated under the Fair Labor Standards Act, as discussed in the earlier subsections Decision the study is designed to inform and Key survey estimates and the precision required of the estimates, it is critical that estimates be as precise as possible. Readily available data tables and online calculators are helpful in determining sample sizes for populations of various sizes. However, because a stratified sample design will be used in this study, the research team will be required to calculate the sample allocation across the various strata. The research team will use a fixed sample size approach in which a fixed sample size n is allocated among strata in a specified manner. The proportion of the sample allocated to the hth stratum is denoted ah = nh/n, where each ah is between 0 and 1 inclusive (i.e., 0 ≤ ah ≤ 1) and the sum of the ah’s is equal to 1:

a1 + a2 + … + aH = 1
Therefore, for each stratum h, the sample size nh is equal to the product of the total sample size and the proportion ah of the sample coming from that particular stratum:
nh = n x ah
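The allocation and weighting steps above can be sketched in Python. The stratum proportions and population counts below are hypothetical, chosen only to illustrate the arithmetic; they are not study values.

```python
# Sketch of fixed-sample-size allocation across strata and the resulting
# design weights. The proportions a_h and populations N_h are hypothetical.

def allocate(n, proportions):
    """Allocate total sample size n across strata: n_h = n * a_h."""
    assert abs(sum(proportions) - 1.0) < 1e-9  # the a_h must sum to 1
    return [round(n * a) for a in proportions]

def design_weights(sample_sizes, population_sizes):
    """w_d = 1 / pi_h, where pi_h = n_h / N_h is the inclusion probability."""
    return [N / n for n, N in zip(sample_sizes, population_sizes)]

# Hypothetical strata: very small, small, medium, and large carriers.
a_h = [0.40, 0.30, 0.20, 0.10]      # proportion allocated to each stratum
N_h = [40000, 20000, 8000, 2000]    # hypothetical stratum population sizes
n_h = allocate(1000, a_h)           # n_h = [400, 300, 200, 100]
w_d = design_weights(n_h, N_h)      # e.g., 40000/400 = 100.0 for stratum 1
```

Each design weight w_d can be read as the number of population units that one sampled unit represents within its stratum.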
Weighting for nonresponse. The research team will use the following approach, advanced in the Statistics Canada publication Survey Methods and Practices (2010), to weight for nonresponse. Assuming a population of 100 and a desired sample size of 25 units, the design weight for every sampled unit would be wd = 4. If only 20 completed questionnaires are obtained, that number constitutes the final sample size. Assuming the responding units can be used to represent both responding and non-responding units, the nonresponse adjustment factor is:

n/nr = 25/20 = 1.25

The final weight is the design weight multiplied by the adjustment factor:

w = 4 × 1.25 = 5

“Therefore, each respondent represents 5 people in the survey population. A final weight of 5 is assigned to each unit on the data file” (p. 124).
A similar approach will be applied to various strata in the stratified random sample approach used for this study.
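The nonresponse adjustment above can be verified with a few lines of Python, using the numbers from the Statistics Canada worked example (population 100, sample 25, 20 responses).

```python
# Sketch of the nonresponse weight adjustment: the final weight is the
# design weight multiplied by the nonresponse adjustment factor n / n_r.

def final_weight(population, sampled, responded):
    """Design weight times the nonresponse adjustment factor."""
    design = population / sampled      # w_d = N / n
    adjustment = sampled / responded   # n / n_r
    return design * adjustment

w = final_weight(100, 25, 20)   # 4 * 1.25 = 5.0
```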
The analyses of survey data will be used for both descriptive and analytic purposes. Where appropriate, descriptive data will be used to generate tables and charts to present convenient displays of summary data that will appear in the report on data analysis. A bar chart showing the incidences of safety consequences for the various types of compensation approaches that carriers use is an example of the type of chart that might be used.
Because “methods of compensation” constitute categorical, nominal data, much of the analysis will be done using the chi square (χ2) test. For instance, in analyzing whether a relationship between method of compensation and frequency of crashes exists, a table such as the following can be used:
One-Way Chi Square (χ2): Crash Frequency

| Method of Compensation | Subtotal of Sample by Comp Method (nX) | Proportion of Sample(1) (nX/k) | Expected(2) (EX) | Observed(3) (OX) | χ2 = ∑ (O − E)2/E |
| --- | --- | --- | --- | --- | --- |
| By the Mile | nM | nM/k | EM | OM | (OM − EM)2/EM |
| By the Hour | nH | nH/k | EH | OH | (OH − EH)2/EH |
| Salaried | nS | nS/k | ES | OS | (OS − ES)2/ES |
| By Percentage of Load | nP | nP/k | EP | OP | (OP − EP)2/EP |
| By Revenue | nR | nR/k | ER | OR | (OR − ER)2/ER |
| By Delivery or Stop | nD | nD/k | ED | OD | (OD − ED)2/ED |
| Other | nO | nO/k | EO | OO | (OO − EO)2/EO |
| Sums | n | 1 | ∑E | ∑O | ∑ (O − E)2/E |
____________________
(1)Where nX is the total number of drivers in the sample paid by a particular method and k is the total number of drivers in the sample. Note that the k used in the study is TBD based on an adequate number of responses for particular methods (>5). Should a method not yield the minimum required number of responses, it may be included in the "Other" category. Similarly, should a large enough number of participants report a specific, unanticipated method in the "Other" category, it may constitute a separate category not listed in the table above.
(2)The expected values are derived by multiplying the proportion of drivers paid using the compensation method by the total number of crashes in the sample. Note that the expected number is theoretical.
(3)The observed values are numbers of crashes for drivers paid by each method of compensation obtained from the survey results.
A hypothetical example of using chi square analysis that results in rejection of the null hypothesis would be similar to the following:
Example of One-Way Chi Square (χ2): Crash Frequency, Null Hypothesis Rejected

| Method of Compensation | Subtotal of Sample by Comp Method (nX) | Proportion of Sample (nX/k) | Total No. of Crashes | Expected (EX) | Observed (OX) | χ2 = ∑ (O − E)2/E |
| --- | --- | --- | --- | --- | --- | --- |
| By the Mile | 1,243 | .67 | 137 | 91.33 | 111 | 4.23 |
| By the Hour | 317 | .17 | 137 | 22.83 | 12 | 5.14 |
| Salaried | 177 | .13 | 137 | 17.13 | 9 | 3.85 |
| By Percentage of Load | 93 | .02 | 137 | 2.28 | 2 | 0.04 |
| By Revenue | 68 | .01 | 137 | 1.14 | 1 | 0.02 |
| By Delivery or Stop | 22 | .01 | 137 | 1.14 | 0 | 1.14 |
| Other | 51 | .01 | 137 | 1.14 | 2 | 0.65 |
| Sums | 1,971 | 1 | 137 | 137 | 137 | 15.07 |
At least one of the proportions of accidents for a compensation method in this example is significantly different from the corresponding proportion of drivers paid by that compensation method. Therefore, the null hypothesis is rejected at the .05 level of significance, given df = 6 and a critical value of 12.59. In this example, the number of observed accidents for the “By the Mile” category, 111, is far greater than the expected number of accidents for that category, 91.33. Conversely, the observed accidents for other categories, such as the “By the Hour” and “Salaried” categories, are significantly fewer than expected.
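The chi-square statistic in the rejected-hypothesis example can be reproduced directly from its expected and observed crash counts; a minimal sketch:

```python
# Reproduction of the chi-square statistic from the "null hypothesis
# rejected" example, using its expected and observed crash counts per
# compensation method (mile, hour, salary, load, revenue, stop, other).

expected = [91.33, 22.83, 17.13, 2.28, 1.14, 1.14, 1.14]
observed = [111, 12, 9, 2, 1, 0, 2]

chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
# chi_sq is approximately 15.07, which exceeds the critical value of
# 12.59 at the .05 significance level with df = 6.
```

In practice the research team's statistical software would perform this computation, but the hand calculation confirms the tabled value.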
A hypothetical example for which we would fail to reject the null hypothesis would be similar to the following:
Example of One-Way Chi Square (χ2): Crash Frequency, Fail to Reject Null Hypothesis

| Method of Compensation | Subtotal of Sample by Comp Method (nX) | Proportion of Sample (nX/k) | Total No. of Crashes | Expected (EX) | Observed (OX) | χ2 = ∑ (O − E)2/E |
| --- | --- | --- | --- | --- | --- | --- |
| By the Mile | 1,243 | .67 | 137 | 91.33 | 93 | 0.03 |
| By the Hour | 317 | .17 | 137 | 22.83 | 20 | 0.35 |
| Salaried | 177 | .13 | 137 | 17.13 | 18 | 0.04 |
| By Percentage of Load | 93 | .02 | 137 | 2.28 | 2 | 0.04 |
| By Revenue | 68 | .01 | 137 | 1.14 | 2 | 0.65 |
| By Delivery or Stop | 22 | .01 | 137 | 1.14 | 1 | 0.02 |
| Other | 51 | .01 | 137 | 1.14 | 1 | 0.02 |
| Sums | 1,971 | 1 | 137 | 137 | 137 | 1.14 |
The proportion of accidents for each compensation method is approximately the same as the proportion of drivers paid by that compensation method; therefore, we fail to reject the null hypothesis at the .05 level of significance, given df = 6 and a critical value of 12.59.
In addition to the chi square test, the study team will use linear regression as a means of mitigating the impact of variables other than method of compensation that act as independent (or, in the case of linear regression, explanatory) variables. The dependent (response) variables examined using linear regression will be the same as those examined in the chi square calculations executed for the study, including incidences of crashes as well as safety violations such as speeding, illegal lane changes, and reckless driving.
Data for a linear regression calculation can be used to construct a scatterplot that provides a visual indication of the strength of relationships between the explanatory and response variables. A regression line can then be obtained that is the best fitting straight line through the plotted points on the scatterplot. In this study, correlation coefficients will be calculated using a computer software program. The computer calculation is based on a numeric calculation used to obtain correlation coefficients yielding values between -1 and 1 and indicating the strength of relationships between the explanatory and response variables.
The formula for the numeric calculation is:
y = α + βx + ε
Where x is the explanatory variable and y is the response variable. The slope of the line is represented by β, and α is the intercept, the value of y when x = 0. The term ε is the residual, or error, constituted of the distances from points on the scatterplot to the line of best fit.
As an example, assume the table below has been populated with 2,000 pairs of variables obtained from the study sample. In this example, the numbers in the x column represent the explanatory variable “years of experience as a CMV driver” and the numbers in the y column represent the response variable “number of speeding citations” each of the drivers in the sample were issued during the period-of-performance for the study.
From our collection of 2,000 paired explanatory and response variables, we obtain a correlation coefficient of -0.017 (rounded to three places). Such a result indicates that essentially no correlation between the explanatory and response variables exists. However, if the result of the calculation were 0.749, we could be confident in saying that a strong positive correlation between the explanatory and response variables exists.
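The correlation-coefficient calculation described above can be sketched in a few lines of Python. The (experience, citations) pairs below are hypothetical illustration data, not study results.

```python
# Sketch of the Pearson correlation coefficient between an explanatory
# variable (years of CMV driving experience) and a response variable
# (speeding citations). The data pairs below are hypothetical.
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient, always between -1 and 1."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical pairs: more experience, fewer citations.
experience = [1, 3, 5, 8, 12, 20]
citations = [4, 3, 3, 2, 1, 0]
r = pearson_r(experience, citations)   # strongly negative for these data
```

In the study itself, this value would be produced by the statistical software package the research team selects rather than computed by hand.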
The research team will draft a report for submission to FMCSA that will summarize the results of the survey and data analysis. The report will include information on the methods, protocols, and procedures used in the study along with the results and analyses. Once FMCSA has reviewed and provided feedback on the document, the research team will make edits as necessary. The research team will also deliver a public use dataset of the survey results. No personally identifiable commercial motor vehicle driver information will appear in the dataset.
Survey results will be distributed to appropriate stakeholders and the research team will conduct interviews or focus groups with them to meet Objective 3 of the study, Assess the safety implications of the commercial driver compensation and their exemption from overtime pay requirements of the Fair Labor Standards Act.
The research team will write a draft final report that provides information on all aspects of the study. The report will be delivered to FMCSA, which will review the report and provide written comments or written approval of the draft document within fifteen days. Should FMCSA provide comments, the research team will make the necessary edits and return the final version of the report within fifteen days.
A second peer review of the draft final report will be executed. The research team will write a report compiling information obtained from the peer review along with a response to the peer review.
The contractor will schedule a final briefing of specific FMCSA officials at headquarters to take place within 60 days from the expiration of the contract. The time and date of the briefing will be jointly determined by FMCSA and the contractor.
Based on the feedback provided by FMCSA on the draft report and the final briefing, along with feedback received during the peer review, the final report will be updated, made 508 compliant, and delivered to FMCSA. The deliverables provided by the contractor will comply with US Section 508 of the Rehabilitation Act and Access Board Standards.
American Transportation Research Institute. Critical Issues in the Trucking Industry—2013. 2013.
Belzowski, B. M., Blower, D., Woodroofe, J., & Green, P. E. “Tracking the Use of Onboard Safety Technologies across the Truck Fleet.” 2009.
Burks, S., Belzer, M., Kwan, Q., Pratt, S., & Shackelford, S. “Trucking 101: An Industry Primer.” The Transportation Research Board of the National Academies Transportation Research Circular No. E-C146. 2010.
Code of Federal Regulations. Title 45, Public Welfare, Department of Health and Human Services, Part 46, Protection of Human Subjects. 2009.
Cohen, J. “A Power Primer.” Psychological Bulletin, 112 (1): 1992. 159.
Faul, F., Erdfelder, E., Lang, A-G., & Buchner, A. “G*Power 3: A flexible statistical analysis program for social, behavioral, and biomedical sciences.” Behavior Research Methods, 39 (2): 2007. 175-191.
Federal Motor Carrier Safety Administration. MCMIS Catalog and Documentation webpage.
Federal Motor Carrier Safety Administration. FMCSA Regulations, Subpart A – General, § 383.5, Definitions. 2012.
Federal Motor Carrier Safety Administration. Request for Quote No. DTMC75-12-Q-00022. 2012.
Fu, J. S., Calgano III, J. A., Davis, W.T, Boulet, J. A. M., & Wasserman, J. F. Improving Heavy-Duty Diesel Truck Ergonomics to Reduce Fatigue and Improve Driver Health and Performance. 2010.
General Accounting Office, Program Evaluation and Methodology Division. Quantitative Data Analysis: An Introduction. 1992.
Government of Canada. Data quality, concepts and methodology: response and non-response. Statistics Canada Technote. Statistics Canada website. 2013.
Green, P. E., & Blower, D. Evaluation of the CSA 2010 Operational Model Test. 2011.
Howarth, H., Alton, S., Arnopolskay, N., Barr, L. & Di Domenico, T. “Driver issues: Commercial motor vehicle safety literature review.” FMCSA Technical Report. 2007.
John A. Volpe National Transportation Systems Center. Safety Measurement System (SMS) Methodology Draft. 2007.
Kasunic, M. Carnegie Mellon Software Engineering Institute: Designing an Effective Survey. 2005.
Krejcie, R. & Morgan, D. “Determining sample size for research activities.” Educational and Psychological Measurement, 30: 1970. 607-610.
Mayr, S., Erdfelder, E., Buchner, A., & Faul, F. “A short tutorial of GPower.” Tutorials in Quantitative Methods for Psychology, 3 (2): 2007. 51-59.
Morris, S. Top 5 ways your MCS-150 may be hurting your CSA score. North American Transportation Association Highway Highlights, 14: 2011. 16.
Monaco, K. & Williams, E. “Accidents and hours-of-service violations among over-the-road drivers.” Journal of the Transportation Research Forum, 40: 105-115.
Office of Management and Budget. Final Information Quality Bulletin for Peer Review. 2004.
Office of Management and Budget. Director’s Memorandum for Heads of Departments and Agencies with the subject line “Issuance of OMB’s Final Information Quality Bulletin for Peer Review.” 2004.
Office of Management and Budget. Standards and Guidelines for Statistical Surveys. 2006.
Statistics Canada. Survey Methods and Practices. Ottawa: Minister of Industry. 2010.
Park, J. “What happens when the facts show the FMCSA goofed?” Truckinginfo.com. 2013.
Privacy Act of 1974, 88 Stat. 1896, 5 U.S.C. § 552a (1974).
Rodriguez, D., Targa, F., & Belzer, M. “Pay incentives and truck driver safety: A case study.” ILR Review, 59 (2): 2006.
Steffes, A., Regan, E. Smichenko, S., & Biton, A. Peer Review Process Guide: How to Get the Most Out of Your TMIP Peer Review. John A. Volpe National Transportation Systems Center. 2009.
Title V of the E-Government Act of 2002: Confidential Information Protection and Statistical Efficiency Act, 116 Stat. 2899, 44 U.S.C. § 101 (2002).
US Department of Agriculture (USDA) website.
US Department of Transportation, Federal Motor Carrier Safety Administration. Efficacy of Web-Based Instruction to Provide Training on Federal Motor Carrier Safety Regulations. 2011.
US Department of Transportation, Federal Motor Carrier Safety Administration. The Impact of Driving, Non-Driving Work, and Rest Breaks on Driving Performance in Commercial Motor Vehicle Operation. 2011.
US Department of Transportation. Federal Motor Carrier Safety Administration: Weather and Climate Impacts on Commercial Motor Vehicle Safety. 2011.
Introduction
Street Legal Industries, Inc. (the research team), is conducting the Impact of Driver Compensation on Commercial Motor Vehicle Safety Survey (the study) under contract to the Federal Motor Carrier Safety Administration (FMCSA). The research team convened a peer review meeting on June 10, 2013, at the SLIND office in Oak Ridge, Tennessee. This report summarizes the proceedings and outcomes of the peer review meeting.
Purpose of the Peer Review
The study is covered by the Final Information Quality Bulletin for Peer Review issued by the Director of the Office of Management and Budget (OMB). As such, peer reviewers were tasked with scrutinizing, evaluating, and providing feedback on the team’s proposed methodology to improve the team’s approach.
Peer Review Meeting Preparation
The research team reviewed the Peer Review Process Guide: How to Get the Most Out of Your TMIP Peer Review (John A. Volpe National Transportation Systems Center, 2009) as a source document for planning the peer review meeting. A table in the guide, Tasks in the TMIP Peer Review Process, was used as a template for constructing a similar table for the current study that appears in the plan document. Tasks completed by the research team included the following:
Planned meeting logistics
Prepared the meeting agenda
Identified specific issues and questions for peer reviewers to address
Prepared and distributed background material to peer reviewers
Prepared a PowerPoint presentation of the study plan for use at the meeting
Conducted pre-meeting telephone calls with peer reviewers
In addition, FMCSA asked the research team to identify members of the peer review panel. Once peer reviewers were identified, their credentials were submitted to FMCSA. FMCSA reviewed the credentials and approved the members of the peer review panel.
Peer Reviewers
The research team identified three subject matter experts to recommend as peer reviewers: Dr. Wayne Davis, Dr. Donald Johnson, and Attorney Edward G. Phillips. FMCSA approved the research team’s recommendations.
Wayne T. Davis is the Dean of the College of Engineering at the University of Tennessee. Dr. Davis holds a B.A. in Physics from Pfeiffer College, an M.S. in Physics from Clemson University, and an M.S. in Environmental Engineering and a Ph.D. in Civil Engineering from the University of Tennessee. He is a member of a number of professional and honorary societies. Dr. Davis is certified as a Qualified Environmental Professional (QEP), accredited by the Council of Engineering and Scientific Specialty Boards (CESB), and holds certification from the Institute of Professional Environmental Practice (IPEP). He has been involved in a number of transportation research studies, including one conducted for FMCSA, Improving Heavy Truck Ergonomics to Reduce Fatigue and Improve Driver Health and Performance. Dr. Davis acted as the Chair of the Peer Review panel.
Donald L. Johnson is the Technical Director, Data Systems and Assessment, at Oak Ridge Associated Universities. Dr. Johnson holds a B.S. in Economics from the University of Tennessee, Chattanooga and a M.A. and Ph.D. in Economics from the University of Tennessee, Knoxville. He has over 25 years of experience in survey research, computer-based analysis, and workforce assessments. Dr. Johnson conducts surveys of industry, academia, and government; coordinates the information processing efforts; and performs statistical analyses and presents the results. He designs web-based questionnaires for collecting survey data using on-line forms for many of the research projects he leads.
Edward G. Phillips received his B.A. from East Tennessee State University, where he graduated with honors. Attorney Phillips also graduated with honors when he received his J.D. from the University of Tennessee. He is a partner at the Kramer Rayson law firm in Knoxville, Tennessee, and is a member of the Knoxville, Tennessee, and American Bar Associations, as well as the ABA Labor and Employment Law Sections. His honors and achievements include serving as the former Chair of the Labor and Employment Section of the Tennessee Bar Association, membership on the Editorial Review Board for the Tennessee Labor Letter, recognition as a leading employment lawyer in Tennessee in the Chambers and Partners USA Clients Guide (2004–2013), and listing in the Best Lawyers in America, Labor and Employment Law (1995–2013).
Peer Review Meeting Logistics
The research team coordinated with peer reviewers and FMCSA staff to schedule the meeting for June 10, 2013. The research team suggested that the meeting be conducted using webcast and conference call technology to minimize travel and expense. FMCSA agreed to the approach.
The research team designed a PowerPoint presentation for use in the webcast. The presentation included a few slides introducing the peer reviewers, FMCSA staff, and research team members, as well as slides showing the agenda. The bulk of the slides provided a comprehensive description of the methodology the research team proposed to use to conduct the study. The team produced hard copies of the PowerPoint presentation, meeting agenda, and study plan and placed them in folders that were distributed to peer reviewers at the meeting.
Two weeks prior to the scheduled meeting, the research team emailed non-disclosure agreement and conflict-of-interest forms, the project plan detailing the approach to the study, the draft survey questionnaire, and other background documents to the peer reviewers. Several days prior to the meeting, the research team successfully tested the computer, webcast software, and audio/visual equipment that would be used. All required forms were returned to the research team by the peer reviewers in a timely fashion.
Peer reviewers were instructed to report to the Street Legal Industries office in Oak Ridge, Tennessee, on June 10, 2013, prior to the 9:00 AM Eastern Standard Time meeting start. The research team emailed detailed driving directions and contact information to the peer reviewers.
Peer Review Meeting Proceedings
The meeting took place on June 10, 2013, in three locations using webcast and conference call technology. FMCSA staff participating in the meeting included Theresa Hallquist, Martin Walker, and Janine Bonner. FMCSA staff participated via conference call from their Washington, DC, office. Because FMCSA staff would not be joining the meeting using webcast technology with video capabilities, they were sent copies of the PowerPoint presentation via email prior to the meeting.
The SLIND program manager, Scott Fillmon, joined the meeting using webcast technology from his office near Austin, Texas. Mr. Fillmon acted as the webcast administrator for the meeting.
Other SLIND staff and peer reviewers used webcast technology to participate in the meeting from the SLIND office in Oak Ridge, Tennessee. SLIND staff participating from the Oak Ridge location included Patrick Bisese, SLIND Vice President and Director of Operations; Perry Jones, SLIND Project Manager; and Lou Rabinowitz, SLIND subcontractor and project statistical analysis subject matter expert. The peer reviewers participating in Oak Ridge were Dr. Wayne T. Davis, Dr. Donald L. Johnson, and Attorney Edward G. Phillips.
Mr. Fillmon started the meeting by introducing the research team and peer reviewers. Ms. Hallquist then introduced FMCSA staff participating in the meeting. After the introductions, Mr. Fillmon went over the meeting agenda. One of the agenda items called for the peer reviewers to meet independently to discuss the approach to the project, but those in attendance agreed that all participants should participate in that exercise. Mr. Fillmon then turned to Ms. Hallquist who provided a brief description of FMCSA and the purpose of the study.
After Ms. Hallquist’s presentation, Mr. Bisese provided a profile of Street Legal Industries, including descriptions of representative clients, projects, and recognitions. Perry Jones then discussed the project management plan, stressing short-term deliverable dates.
After Mr. Fillmon provided an overview of the research plan, he turned the meeting over to Lou Rabinowitz to begin the discussion of plan specifics. Lou’s presentation began with a summary of the research team’s literature review and the study hypothesis. He then provided details about study methodology. This discussion reiterated information provided in the printed research plan.
After a short break, the meeting reconvened for participants to engage in open discussion. Dr. Davis began by stating that the discussion could be framed around three topics: (1) the approach to the study, (2) the questionnaire to be used to collect survey data, and (3) Office of Management and Budget (OMB) approval of the study plan. He went on to ask several questions regarding the research team’s intention to stratify the study sample by decile rankings in the Safety Management System (SMS). He wondered whether that approach might result in a bias based on the size of companies that are ranked in the SMS. Further, he was concerned that the research team may be over-reaching by attempting to use rankings from two Behavior Analysis and Safety Improvement Categories (BASICs). Specifically, he felt using two BASICs would result in an unnecessary level of study complexity. He went on to recommend that, if the team decided to use the two-BASICs approach, the two BASICs be correlated and combined into a single measure. An alternative would be to examine only one BASIC. Dr. Rabinowitz suggested that the research team work with FMCSA to either correlate the two BASICs or identify a single BASIC for the study. Ms. Hallquist concurred. After some more discussion, Mr. Fillmon and Dr. Davis concluded that, if only one BASIC is examined, it should be the “Unsafe Driving BASIC.”
Dr. Johnson questioned the value of using regions of the country to stratify data. Mr. Fillmon pointed out that the team had considered using the regional approach to stratification in early project planning, but no longer felt strongly in that regard. Ms. Hallquist stated that she was not too concerned about capturing regional differences in the data. However, she felt strongly that the research team should capture data about such variables as carrier size and type of operation (such as whether a carrier is short-haul or long-haul).
The team agreed that the discussion had reached an appropriate point to break for lunch.
Dr. Davis began the afternoon session of the meeting with a discussion of the survey questionnaire. He and Attorney Phillips pointed out some minor typographic and grammatical errors that Mr. Fillmon noted for correction.
Attorney Phillips pointed out that the first item on the survey should be reworded to read “Are any of your company’s drivers part of a collective bargaining unit?” with respondents answering yes or no and, if yes, providing the number of drivers covered. After some discussion, Attorney Phillips recommended eliciting only a yes or no answer and not attempting to gather further information about the number of drivers covered by an agreement, as that information would not add to the study.
The question arose of what would be included in the public data set that the research team is to provide to FMCSA. Specifically, would the data set include personally identifiable information? Dr. Rabinowitz stated that the survey team understands that no personally identifiable information would be included. Further, no individual company’s data would be identifiable through a simple review of the data set. He deferred to Ms. Hallquist, who confirmed that the team’s assumption was correct.
Attorney Phillips suggested that possible responses to the item about drivers’ annual compensation be broken into a variety of categories such as $0‒25,000; $25,001‒50,000; $50,001‒75,000; and more than $75,000. After some further discussion, it was decided that categorizing compensation was the approach to use, but the research team would work with Ms. Hallquist to determine the appropriate ranges to use for categories.
After some discussion of item 4 and item 5 in the survey, the participants agreed that those items should be deleted.
The group realized that there was a numbering error on the questionnaire that the research team will correct.
Dr. Johnson noted that many of the questionnaires he has designed include a demographic section and wondered if such a section would be of value to the current study. Ms. Hallquist said that some demographic information may be of value but that she would want to know more about which demographics to include. Attorney Phillips questioned the relevance of collecting demographic data. Dr. Davis agreed that such data would be of limited value in correlating pay methodology to safety.
Dr. Davis stated that he didn’t see any issues on the questionnaire that raised concerns about individually identifiable information. Dr. Johnson asked “Who owns the data?” Dr. Rabinowitz stated that FMCSA owns the data, but that no personally identifiable information would be provided in the public data set delivered by the research team to FMCSA. Ms. Hallquist concurred. Dr. Johnson asked who would have access to the final report. Dr. Rabinowitz stated that it would be in the public domain. Ms. Hallquist said that it would be available on the FMCSA website. However, Dr. Rabinowitz stated that the public data set would not be part of the report and it would only be available on request.
Dr. Johnson asked whether the study would be replicated in the future. Ms. Hallquist stated that the current plan does not include additional studies but that FMCSA might consider that possibility in the future.
Dr. Davis concluded the meeting by asking the group if anyone had more questions or issues to discuss.
Dr. Johnson stated that the research team should use caution when weighting samples. Specifically, he warned that weights should remain consistent with the ratio of the population to the sample size, so that no single case is weighted to represent an implausibly large share of the population. He used an example in which, in a small stratum, one individual case might be weighted in such a way as to represent 10,000 units in the population. He then asked what the confidence level of the study was to be. Dr. Rabinowitz suggested that a 95% confidence level appeared to be the standard for similar studies. Ms. Hallquist agreed that the 95% confidence level was typical for FMCSA studies and was the appropriate level to use.
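Dr. Johnson’s caution can be illustrated with a small hypothetical calculation (not part of the study plan; the stratum names and counts below are invented for illustration only). In stratified sampling, a stratum’s base design weight is its population count divided by its sample count, so a very small sample drawn from a large stratum produces extreme weights of exactly the kind he warned about:

```python
def design_weights(strata):
    """Compute base survey design weights per stratum.

    strata: dict mapping stratum name -> (population_count, sample_count)
    Returns a dict mapping stratum name -> weight (population / sample),
    i.e., the number of population units each sampled case represents.
    """
    weights = {}
    for name, (pop, n) in strata.items():
        if n == 0:
            raise ValueError(f"stratum {name!r} has no sampled units")
        weights[name] = pop / n
    return weights

# Hypothetical strata: one respondent standing in for 10,000 carriers is
# the extreme case Dr. Johnson cautioned against; the second stratum's
# weight of 100 is far less fragile.
strata = {
    "large_stratum": (10_000, 1),
    "small_stratum": (50_000, 500),
}
w = design_weights(strata)
print(w["large_stratum"])  # 10000.0
print(w["small_stratum"])  # 100.0
```

A single extreme weight means one respondent’s answers can dominate the estimates for that stratum, which is why reviewing the weight distribution before analysis is standard practice.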
Mr. Fillmon reported that he did some research during lunch and found a focus group study done by Robert Carroll that defines short-haul operations as those limited to driving within 100 miles of the carrier’s home base (a 100 air-mile radius) and long-haul operations as those driving more than 100 miles from the home base.
Mr. Fillmon summarized the action items identified during the meeting including changes that will be made to the approach to the study methodology and changes that will be made to the survey questionnaire. In a response to a question by Dr. Rabinowitz about timing, Mr. Fillmon proposed delivering the modified study plan to the peer reviewers by Friday, June 14, 2013, for a second review.
Attorney Phillips suggested that the report of the peer review meeting along with a redlined version of the revised project plan and questionnaire be provided to the peer reviewers. Mr. Fillmon agreed to that request and said that the documents would be delivered in electronic Microsoft Word versions. He requested the peer reviewers complete the second reviews and return their documents to the research team by Wednesday, June 19, 2013.
Post-Meeting Discussion with FMCSA
A follow-up conference call between the research team and FMCSA took place on July 3, 2013. FMCSA staff expressed their concern about the research team’s approach to sampling. Specifically, they questioned whether a study frame consisting of only those carriers with enough violations or crashes to be assigned percentile(s) in the SMS (92,000 carriers) is sufficient. They felt that data set was too restrictive and might not yield meaningful numbers of carriers that use methods other than pay-per-mile to compensate drivers. Further, FMCSA felt that driver out-of-service rates, vehicle out-of-service rates, and crash rates might be more appropriate variables to examine as part of the study than ratings in the SMS.
After some discussion, it was agreed that the research team would use a more inclusive group of carriers than just the category of carriers with enough violations or crashes to be assigned percentile(s) in the SMS. However, FMCSA agreed that doing additional analysis of the aforementioned 92,000 carriers might be of value. Ms. Hallquist, Mr. Fillmon, and Dr. Rabinowitz agreed that it would be best for the research team to revise their plan for defining the sample frame, selecting the survey sample, and collecting data, and to submit it to FMCSA for review and approval prior to completing and distributing this report to the peer reviewers. Mr. Fillmon and Dr. Rabinowitz agreed to submit their revised sampling plan to Ms. Hallquist on Wednesday, July 10, 2013.
Subsequently, the research team reviewed information provided by FMCSA and has recommended that the survey frame consist of the 201,000 carriers with sufficient data to be assessed in the SMS. This category includes the 92,000 carriers with enough violations or crashes that are assigned percentile(s) in the SMS. This approach is detailed in the revised plan provided by the research team to FMCSA.
Subsequent Review of Revised Plan
The project plan has undergone several revisions as a result of feedback received from FMCSA. The research team distributed the final revision (which had previously been submitted to the institutional review board) to the peer reviewers for their review and comment. Each member of the peer review committee reviewed the final revision of the plan and recommended that appropriate application documents be submitted for OMB review once the plan has been approved by the institutional review board (IRB). Specific comments from peer reviewers include the following:
Dr. Wayne Davis, Chair of the Peer Review Committee, commented that the latest iteration of the plan appeared sound. He also indicated that he agreed with using the more than 730,000 carrier companies listed in the Motor Carrier Safety Management Information System (MCMIS) rather than the more limited number of carriers listed in the Safety Management System (SMS) database, which would “help eliminate uncertainty” by providing a larger group of potential survey participants. This comment is consistent with the FMCSA concern that using the SMS as the sample frame might introduce bias.
Dr. Donald Johnson did not offer specific comments on the changes to the plan, but expressed that he felt the research team should move forward with submission of the application for OMB review.
After reviewing the final revision of the plan, Attorney Edward Phillips and the research team’s principal investigator discussed the change to the MCMIS database as the sample frame, the approach to collecting data about unsafe driving violations and crashes via items on the revised survey questionnaire, and the approach to statistical analysis in a telephone conversation. Consistent with the recommendations of the other two peer reviewers, Attorney Phillips recommended that the research team submit the OMB application materials.
A limited pilot study of six (6) trucking companies was done by the research team during the month of September 2013 to test the effectiveness of the study approach and the validity of the survey questions detailed in this document.
The research team extracted a list of non-passenger motor carriers located within a 200-mile radius of Oak Ridge, Tennessee, from the MCMIS database. The list was pared down to include only those with current MCS-150 information. The list was grouped by peer group size and sorted by number of drivers. The intent of the team was to interview at least one individual from a commercial motor vehicle carrier in each of the peer groups, up to a total of nine individuals. The research team contacted carriers by phone and email until one carrier in each peer group agreed to participate in the pilot study. The study included a total of five carriers.
FMCSA Commercial Motor Vehicle Carrier Peer Group Categories

| Peer Group | No. of Drivers | Carriers Interviewed |
|------------|----------------|----------------------|
| Very Small | 1–5            | 2                    |
| Small      | 6–50           | 1                    |
| Medium     | 51–500         | 1                    |
| Large      | >500           | 1                    |
The interviews with the carriers from the large and medium peer groups and one from the very small peer group were conducted in person at the carriers’ locations. The interviews with the remaining carriers from the small and very small peer groups were conducted by telephone.
Each of the pilot interview respondents was a Safety Director. In one case, the carrier’s recruiting manager also participated in part of the interview. Respondents were asked to provide feedback about each individual survey item. Each of the interviews took one hour or less to complete. Based on the interviewers’ pilot experience and feedback from respondents, it is anticipated that actual initial interviews focusing on carrier data will take less than thirty (30) minutes to complete.
All of the carrier participants in the pilot interviews were open and friendly and indicated that they would have no trouble responding to items on the initial survey. They stated that they could respond to most of the items on the initial survey questionnaire. These items include the following:
Method drivers in the company are paid
The estimate of percent of drivers paid by various methods
Whether drivers are paid an overtime rate and the bases for overtime pay
Whether drivers are paid for time beyond regular pay (e.g., sleeper berth time)
The benefits provided to drivers
Whether the company has changed the way it pays drivers within the past five years and why such a change has taken place
The type of operation (for-hire or private and truckload, less-than-truckload, regional, tanker, or other)
Whether driver(s) are owner operators
Whether drivers are contracted to the CMV carrier being discussed or with other companies
The number of power units in the fleet
Primary commodities hauled
Estimate of the average age of drivers and how many regular full-time drivers work for the company.
Items that respondents couldn’t immediately respond to and would require further research included the following:
Average annual total compensation paid to full-time drivers
Average length of haul for drivers
The typical number of years driving experience of drivers
When respondents were told that companies that pay drivers using more than one method would be queried about individual drivers, they expressed concern about sharing personally identifiable information. One respondent flatly stated that his company would be unwilling to share information about individual drivers. Another respondent said he would have to check with his company’s legal counsel before sharing individual driver information. The interviewers requested that that participant consult with his company’s legal counsel and let them know the result of that consultation. Subsequently, the respondent reported that his legal counsel’s opinion was that the company would not share individual driver information.
Several main points were derived from the pilot study that influence the research approach: direct all communication to the correct participant by formal name and title to ensure proper attention will be given to the survey; be considerate of the participants’ time and provide flexible response windows; and make it clear that FMCSA is sponsoring the study. The participants also indicated a reluctance to respond based on past experiences with FMCSA requests for voluntary participation.