
Electronic Device Distraction (EDD): Test of Peer to Peer Intervention Combined with Social Marketing



INFORMATION COLLECTION

FEDERAL RAILROAD ADMINISTRATION

SUPPORTING JUSTIFICATION

ELECTRONIC DEVICE DISTRACTION: TEST OF PEER TO PEER INTERVENTION

COMBINED WITH SOCIAL MARKETING; OMB No. 2130-NEW

SUPPORTING JUSTIFICATION – Part B


1. Description of sampling method to be used.


We need a relatively small number of respondents to answer questions about how they see the program working and what would make it work better. What is important is to make sure that the respondents are knowledgeable about the stakeholder group from which they are drawn. We are basing our selection process, time estimates, and question content on our extensive experience evaluating the Confidential Close Call Reporting System at the Union Pacific Railroad, Amtrak, the Canadian Pacific Railroad, and New Jersey Transit. Parallel questions have been addressed to the DOT’s implementation team and to relevant FRA officials. In all cases, we have found that 30 minutes (at the outside) is sufficient time to collect the data we need. In our experience, almost no one refuses to be interviewed. We are dealing with people who are invested in the program and interested in it. They are almost always anxious to share their experiences and impressions. Three respondent groups will be included in this study. Numbers of each and time demands are shown in the following table.


Respondent group                              # of respondents   Time per response (hours)   Total Annual Burden Hours
Pilot site personnel                          50                 0.5                         25
Norfolk Southern personnel involved in
  implementing and managing the pilot         15                 0.5                         7.5
Project team members                          15                 0.5                         7.5
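
Summing across the three respondent groups, the implied total annual burden is:

(50 x 0.5) + (15 x 0.5) + (15 x 0.5) = 25 + 7.5 + 7.5 = 40 hours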


2. Description of procedures for information collection, including statistical methodology for stratification and sample selection.


With respect to railroad personnel, the procedure for selecting respondents is to work with the management and labor personnel who are involved in implementing the program to identify people who represent their stakeholder group and can speak articulately about their experiences. We also ask that, as people are identified, some skeptics of the program be included. With respect to the outside consultants on the implementation team, there will be only two or three of them, all of whom are known to us. As shown in the table above, based on our experience with similar programs, we expect each interview to take no more than 30 minutes.

We realize that the method of recruiting respondents is likely to skew the sample toward people who think well of the program. Thus it would be inappropriate to rely on these data for any kind of objective estimate of effectiveness or efficiency. What we will get is qualitative information about the perceived effectiveness of the program, as judged by personnel with an interest in the activities and success of the program. Our experience with the evaluation of the Confidential Close Call Reporting System is that such opinion is relevant and useful for understanding how programs like this operate and how they might be improved. Also, while the procedure will tend to attract those who think well of the program, our experience is that it also brings in a fair share of vocal and articulate skeptics.


Estimation Procedure


The estimation procedure is explained in the paragraph above.


Degree of Accuracy Needed for the Purpose Described in the Justification


Because we will be conducting interviews, “accuracy” in the statistical sense is not a relevant concept here. What is important is to have good “coverage” within each stakeholder group. The table in question #1 (Part B) shows a plan to obtain this coverage by interviewing multiple respondents within each group.


In terms of project leadership and corporate oversight, there are no more than 10 Norfolk Southern (NS) personnel. Thus, the estimate of 15 interviews allows for 100% sampling of this group, plus multiple interviews with some people. 100% coverage is needed because these are corporate officials who have different responsibilities with respect to peer to peer programs and electronic device distraction. (For instance, people in the Legal department and people in the Safety department will have very different opinions as to what constitutes a successful program.) The same reasoning applies to the project team members.


With respect to the workforce, the test site has approximately 450 employees. (The number fluctuates over time.) Until the project is implemented, it will not be known how many people will actually be involved in conducting peer to peer activity. However, based on our experience at other railroads and conversations with the implementation consultants, 50 is a reasonable estimate, accounting for the need to interview some people several times.


Unusual Problems Requiring Specialized Sampling Procedures


There are no such problems.



Any Use of Periodic (Less Frequent than Annual) Data Collection Cycles to Reduce Burden


All data collection will take place within the one-year timeframe of the project.


(Note: Depending on how the program is implemented, the time may be extended a few months, but this determination cannot be made until the project is under way.)


3. Description of methods to maximize response rates and to deal with non-response issues.


The data collection being requested involves collecting qualitative data as part of an in-depth case study methodology. These data will be combined with quantitative data on operations that will be provided by NS. Sampling will be aimed at drawing data from "key informants" who are judged as such by the peer to peer implementation team and other close observers of the scene. Our experience doing these kinds of interviews for the evaluation of the Confidential Close Call Reporting System indicates that almost nobody will refuse to be interviewed. The qualitative analogue of "accuracy" and "reliability" will be tested by assessing consistency within interviews and across similar respondents (although caution is needed with the across-interview comparisons to allow for the likelihood that different respondents will have different opinions). With respect to "generalizability," we propose to treat this construct as is often done in qualitative case studies, i.e., to assure that there is a reasonable spread of people with respect to important factors such as work duties and seniority.


In order to maximize the likelihood of respondents agreeing to be interviewed, we propose to begin each interview with an introduction modeled after the introductory statement used for the evaluation of the Confidential Close Call Reporting System, which has been approved by OMB. This statement will also inform the respondent of the length of the interview. The statement will read as follows:


The objective of this interview is to develop knowledge about how effective peer to peer programs can be developed to combat electronic device distraction. To protect privacy we are not recording any names. All we need is a general description of each respondent, such as: “BLET member, more than 10 years’ experience.” In addition, no quotations will be reported that might reveal anyone’s identity. The interview will last about half an hour. Thanks for being willing to help us.


4. Describe any tests of procedures or methods to be undertaken.


Prior to full-scale deployment, interview questions will be tested in two ways. First, members of the peer to peer implementation team will provide input and feedback as to what questions would be of use in guiding their actions. Second, the interview protocol will be tested on a few people from each respondent group and adjusted as necessary.


5. Provide the name and phone number of individuals consulted on statistical aspects of the study design and of other persons who will collect/analyze the information for the agency.


As explained earlier, “statistics” as the word is generally understood is not relevant here. This is because the method being used is “key informant” interviewing.


Data will be collected by Dr. Jonathan Morell, Director of Evaluation at the Fulcrum Corporation and the lead evaluator on this project. Dr. Morell can be reached at the following number and e-mail address: 734 646-8622, or [email protected].



