
Supporting Statement

National Practitioner Data Bank and

Healthcare Integrity and Protection Data Bank

Market Surveys and Survey of Use of Data Bank Information by Queriers



A. Justification


1. Circumstances of Information Collection


This is a request to conduct a new survey of entities that have the legal obligation or entitlement to query and/or report to the National Practitioner Data Bank (NPDB) and/or the Healthcare Integrity and Protection Data Bank (HIPDB). These users include medical malpractice payers, professional societies, hospitals, and other health care providers such as Health Maintenance Organizations.


In 1986, Congress enacted Public Law 99-660, the Health Care Quality Improvement Act of 1986 (HCQIA). Title IV of the Act mandates the creation of the NPDB, which was designed to support, encourage, and stimulate peer review and to make it difficult for health care practitioners with adverse records to move from State to State or facility to facility without discovery of those records. The Act directs the Secretary to establish a National Data Bank to receive and disseminate information on certain adverse actions taken against licensed health practitioners. This project is proposed pursuant to 42 U.S.C. 11137 as an evaluation of the manner in which users and others are served by the Data Bank.


The purpose of the NPDB is to improve quality of health care by encouraging the health care system to identify and discipline those who engage in unprofessional behavior. The information contained in the NPDB constitutes a “flagging” or “alert system” for use in guiding discrete inquiry into and scrutiny of specific areas of practitioners’ licensure, professional society memberships, malpractice payment history, and record of clinical privileges.


Insurance companies and other entities must report to the NPDB any payment made for the benefit of a licensed health care practitioner in relation to a medical malpractice action or claim. State medical and dental boards must report to the NPDB disciplinary actions taken against physicians and dentists. Health care entities such as hospitals must report peer review decisions that adversely affect, for more than 30 days, the clinical privileges of physicians or dentists. Professional societies must report adverse peer review actions regarding the membership of physicians and dentists. Hospitals are required to query the NPDB every two years concerning members of their medical staff, and to query concerning new applicants for clinical privileges or staff membership. Health care entities, including managed care organizations, professional societies, and state medical and dental boards, may query the NPDB. Individual health care practitioners may perform self-queries.


The Health Insurance Portability and Accountability Act of 1996 (HIPAA), Public Law 104-191 (Section 1128E of the Social Security Act), requires the Secretary of the Department of Health and Human Services (DHHS), acting through the DHHS Office of Inspector General (OIG), and the United States Attorney General to create a national health care fraud and abuse control program. Among the major components of the program is the establishment of a national data bank to receive and disclose certain final adverse actions taken against health care practitioners, providers, and suppliers. This data bank is known as the Healthcare Integrity and Protection Data Bank (HIPDB).


The purpose of the HIPDB is to combat fraud and abuse in health insurance and health care delivery and to promote quality health care. The HIPDB is primarily a flagging system that may serve to alert users that a more comprehensive review of a practitioner, provider, or supplier’s past actions may be prudent. HIPDB information is intended to be used in combination with information from other sources (e.g., evidence of current competence through peer review or continuous quality improvement studies, peer recommendations, verification of training and experience, and relationships with organizations) in making determinations on employment, affiliation, certification, and licensure decisions.


Federal and State Government agencies and health plans are eligible to query and are required to report to the HIPDB. Health care practitioners, providers, and suppliers may self-query the HIPDB.


The Practitioner Data Banks Branch (PDBB) conducts studies to ensure that the NPDB and the HIPDB are meeting the intent of the HCQIA and HIPAA and are serving their customers in the best way possible.


In addition, PDBB has commissioned a series of studies by contractors and by the DHHS Office of Inspector General. Issues examined include the quantity and quality of information provided by the NPDB, user satisfaction with the information received from the NPDB, the process by which users interact with the NPDB, and how NPDB information affects decision making.


In 1998, DHHS contracted with the Institute for Health Services Research and Policy Studies, Northwestern University, and the Health Policy Center, Survey Research Laboratory, University of Illinois at Chicago to complete a survey of NPDB users and non-users. The survey was completed in 2001. The study revisited many of the areas of customer satisfaction first explored in a 1994 study commissioned by HRSA with Walcoff and Associates. The proposed survey will update the results of the Northwestern University – University of Illinois at Chicago survey.


Survey information concerning user satisfaction with the NPDB and HIPDB is particularly necessary because the NPDB and HIPDB are funded exclusively through user fees. The HIPDB receives a minor supplement from the Health Care Fraud and Abuse Control program, which partly offsets the cost of free queries for Federal agencies.


The survey is crucial for the Data Bank’s PART review, specifically sections 2.6: “Are independent evaluations of sufficient scope and quality conducted on a regular basis or as needed to support program improvements and evaluate effectiveness and relevance to the problem, interest, or need?”; and 4.5: “Do independent evaluations of sufficient scope and quality indicate that the program is effective and achieving results?”. Without these survey data, the program will not receive PART credit for these items. The survey is specifically mentioned in sections 1.2, 1.4, 4.1, and 4.2, as well as in the Program Performance Measures section of the PART review (the PART review is attached).


2. Purpose and Use of Information


The focus of this evaluation is to answer questions regarding the use of the NPDB and HIPDB by the entities required or eligible to query and/or report to the Data Banks. The study will be implemented under contract with The Gallup Organization using the methodologies, sampling framework, and analytic design described in the subsequent sections of this clearance request. Gallup has developed survey instruments to evaluate the quality and assess the impact of the NPDB and HIPDB for entities that query and report to one or both Data Banks. Additionally, non-users will be surveyed to determine the reasons for non-use and potential strategies to increase participation among entities in this group.


One purpose of the proposed project is to conduct a follow-up study of NPDB users and non-users to respond to information needs in several broad areas. Another purpose is to gather parallel information from users and non-users of the HIPDB. When the previous study was conducted, the HIPDB had been open for only a short period of time and a thorough evaluation was not yet possible. An analysis of user satisfaction will allow for the development of strategies to improve the user experience and the impact of the Data Banks, consequently increasing participation. More specific uses of the collected information include:


  1. Determine user satisfaction with the information provided by the NPDB and HIPDB; e.g., are users satisfied with the type, quality, and quantity of information they receive?

  2. Determine user satisfaction with the new electronic processes used to query and report to the NPDB and HIPDB, and collect suggestions for ways to improve these processes. These electronic processes have been implemented and/or improved since the previous study was commissioned.

  3. Determine user satisfaction with the ease of reporting to the NPDB as well as collect suggestions on ways to simplify the reporting process while maximizing the quality of information collected.

  4. Identify and assess new products and services, including new types of reports that would make query results more useful and/or expand the NPDB or HIPDB voluntary customer base.

  5. Evaluate the effectiveness of NPDB and HIPDB educational and outreach programs such as the Help Line, the NPDB-HIPDB newsletter, and the NPDB-HIPDB and Practitioner Data Banks Branch web sites.

  6. Determine what factors distinguish entities that register for and use the NPDB or HIPDB from eligible entities that do not register or use the Data Banks.

  7. Determine the impact of information generated from queries that result in a “match” with information contained in the databases.


3. Use of Improved Information Technology


The study design minimizes the burden on respondents and takes advantage of improvements in data collection technology since the fielding of the 2001 NPDB survey. Primary data collection for the user survey will utilize web technology. Because users access the NPDB and HIPDB via the internet, a web-based survey is likely to be the methodology respondents prefer.


Eligible respondents for the user surveys will first be contacted in a brief telephone recruit call administered using CATI (computer assisted telephone interviewing) technology. The purpose of this call is to verify eligibility and gain cooperation from the respondent. Once qualified, each respondent will receive instructions that will direct them to a secure internet web site where they will complete the survey.


Utilizing web technology easily allows for complex skip patterns and references to earlier respondent answers, which makes the survey more efficient for respondents. Web technology also allows a respondent to complete the survey at any time, as the secure web site hosting the survey is accessible 24 hours a day. Additionally, respondents may stop the survey if necessary and return to the secure web site whenever they would like without having to start the survey over, so respondents can divide the time it takes to complete the survey into increments of their choosing. Ninety-five percent of all user surveys are expected to be completed via the web, while the CATI option will be available for the remaining 5% of respondents who do not complete the survey via the web.


The non-user surveys will also take advantage of CATI technology for the recruit portion of the data collection process. Once non-users have been qualified, a live interviewer will switch the respondent over to an interactive voice response (IVR) version of the survey. The CATI-to-IVR process has several major benefits: (1) reducing the respondent burden by automating interviewer instructions and skip logic, so that the interviews progress quickly and smoothly from question to question; (2) minimizing interviewer error through control over questionnaire logic, consistency checks, and probes; (3) eliminating the need to call back respondents to obtain missing data, since errors and inconsistencies are corrected during the interview process; and (4) greatly reducing the data editing tasks post-collection, through the use of soft and hard edits and consistency checks.
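
The sketch below is a minimal illustration of the kind of automated skip logic and soft/hard edit checks described above. It is not taken from the actual CATI, IVR, or web instruments; the question identifiers, rules, and thresholds are invented for the example.

```python
from dataclasses import dataclass, field

# Hypothetical illustration of automated skip logic and soft/hard edit checks;
# the question IDs and rules below are invented, not from the real instruments.

@dataclass
class Response:
    answers: dict = field(default_factory=dict)

def next_question(resp: Response) -> str:
    """Skip logic: entities that did not query skip the satisfaction items."""
    if resp.answers.get("queried_last_year") == "no":
        return "Q10_reasons_for_not_querying"
    return "Q5_satisfaction_with_query_process"

def edit_checks(resp: Response) -> list:
    """Hard edits block impossible values; soft edits only flag unusual ones."""
    problems = []
    matches = resp.answers.get("matched_responses", 0)
    queries = resp.answers.get("queries_submitted", 0)
    if matches > queries:       # hard edit: logically impossible combination
        problems.append("HARD: matched responses exceed queries submitted")
    if queries > 10000:         # soft edit: unusual value, confirm with respondent
        problems.append("SOFT: unusually high query volume, please confirm")
    return problems

# Example interview fragment
resp = Response({"queried_last_year": "yes",
                 "queries_submitted": 120,
                 "matched_responses": 150})
print(next_question(resp))
print(edit_checks(resp))
```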

4. Efforts to Identify Duplication


This survey seeks to obtain information unavailable through existing sources. The results of the 2001 survey will be used to the extent possible for comparison with current results.


5. Involvement of Small Entities


We expect that entities which use the Data Banks less frequently than others will have different suggestions, priorities, and perspectives. In order to determine these differences, the same survey instrument will generally be used for large and small entities. Small entities, such as hospitals with a very small number of patient beds, may query less frequently and receive fewer matched responses than larger entities; as a result, they will be asked fewer questions regarding their satisfaction with matched response reports.


6. Consequences if Information Collected Less Frequently


This study collects data only once in order to compare and evaluate current levels of satisfaction longitudinally with those found in the 2001 survey. If the proposed survey is not conducted, we will be required to continue to rely on 2001 data for budget and PART review purposes.


7. Consistency with Guidelines in 5 CFR 1320.5(d)(2)


There are no special circumstances relevant to this project.


8. Consultation Outside of the Agency


The 60-day notice of this proposed information collection was published in the Federal Register on February 21, 2007 (Vol. 72, No. 38, pages 7892-7893). No comments were received.


9. Remuneration of Respondents


There will be no remuneration of respondents or entities participating in the survey.


10. Assurance of Confidentiality

Gallup has in place a comprehensive system of interlocking procedures to protect the privacy and anonymity of respondents both during data collection and processing and after the conclusion of the survey. If necessary, the features of this system can be tailored to meet any requirements specified by the NPDB-HIPDB.

Respondents will be informed that their answers will be kept private to the extent allowed by law. Survey respondents will be informed that all identifying information will be separated from their responses and that provisions have been made to maintain the security of all data. They also will be informed that participation in the survey is voluntary, and assured that if they decide not to take part in the survey, or choose not to respond to any particular item in the questionnaire, no adverse action will result. These assurances will be included in the relevant survey materials such as the cover letter to the mail questionnaire, and telephone interviewers will be required to communicate these assurances to each respondent before beginning the interview. Finally, respondents who request access to their records will be given such access and will be permitted to amend their responses if they so desire.

Once data are collected, mail questionnaires and related survey documents that contain personal identifying information will be maintained in a separate file or room, access to which will be restricted to authorized project staff. Individual identifying information about a respondent will be used only to facilitate the execution of the study, such as collecting data, verifying sample identity, authenticating data collections, obtaining and editing missing data through retrieval, and matching new data with old.

Gallup enforces strict procedures for maintaining confidentiality and data security among all project staff and organizational personnel. These procedures apply to virtually every aspect of the study, including but not limited to sample generation; receipt control and data collection; coding and editing; data processing; data analysis; and all ancillary activities that require the maintenance of follow-up information. These procedures physically separate the identifying information required for follow-up from the data required for research.

Gallup’s confidentiality pledge, document security, and other protections

Gallup personnel are required to sign a “Confidentiality Pledge” to respect, maintain, and protect the confidentiality and anonymity of all survey respondents, and not to disclose any information that might identify an individual respondent or information about him or her. Gallup’s standards for maintaining respondent confidentiality and anonymity are in keeping with the Code of Professional Ethics and Practices of the American Association for Public Opinion Research (AAPOR), the Council of American Survey Research Organizations (CASRO), the American Statistical Association (ASA), and other professional associations and societies involved in the design and conduct of survey, statistical, and public opinion research.

Gallup’s personnel are trained to prohibit and effectively prevent any person other than authorized project team members from seeing or having access to the acquired survey information while it is in the possession or under the control of Gallup. Personnel are also trained to prohibit and effectively prevent any project team member from disclosing the contents or description of the documents or information to any person not authorized as part of a contract to have access to such documents or information, and to prohibit and prevent the removal of any of the acquired documents or information in any form from Gallup’s premises without authorization. The awareness and training also extend to ensuring that data are stored, maintained, and managed in accordance with the contractor’s security requirements. Staff who fail to abide by these rules are subject to disciplinary action, up to and including termination of employment, denial of further access, and legal action. Similar standards and policies apply to visitors, contractors and consultants, and other non-employees.

In addition, Gallup’s main production facilities are restricted areas protected by security and surveillance systems. Special procedures for some surveys require that documents be used in locked rooms and/or locked in filing cabinets after use, with access to these data restricted to authorized personnel only. Release of survey or related information other than that found in public use files is corporately controlled and subject to careful review. Gallup never releases respondent names or other identifying information to clients or other requesters unless such release is specified by contract, in which case this and all potential uses of such data are included as a component of informed consent. Upon completion of the project, all materials determined to be pertinent will be submitted to the agency or disposed of, in accordance with instructions from the agency.

Staff awareness and training about confidentiality

All Gallup project staff are aware of and trained in the confidentiality procedures and protections to be enforced, and in how these affect the type of work in which they are involved in a survey (e.g., interviewing, coding/editing, data processing). However, because telephone interviewers and supervisors interact directly with respondents, they receive more specialized training, including training in confidentiality policies and regulations for the agency or as legislated for a specific study. Gallup underscores the importance of this issue by devoting one or more training topics to confidentiality. In addition, the written materials typically include a chapter in the interviewer and supervisor’s manual devoted exclusively to the issues of confidentiality, privacy, anonymity, informed consent, and related topics. This high level of awareness and training among our staff ensures the concrete application of confidentiality protections at all levels and in all survey operations. Gallup has been a leader in developing confidentiality protections and has maintained a sterling record of protecting the confidentiality and data of its survey respondents for both private and public clients for more than 70 years.

11. Questions of a Sensitive Nature

No questions of a sensitive nature will be asked.


12. Estimates of Annualized Hour Burden


Respondent Description            Number of     Responses Per   Total       Hours per   Total Burden
                                  Respondents   Respondent      Responses   Response    (Hours)

NPDB Users Group Survey
  Malpractice Payers                      228             1           228         .25             57
  Licensing Boards                         90             1            90         .25             22
  Hospitals (Reporting)                   466             1           466         .25            116
  Hospitals (Querying)                    994             1           994         .25            248
  MCOs                                    900             1           900         .25            225
  Other HCEs (Reporting)                   57             1            57         .25             14
  Other HCEs (Querying)                   976             1           976         .25            244

HIPDB Users Group Survey
  Licensing Boards                        231             1           231         .25             57
  Government Hospitals                    390             1           390         .25             97
  MCOs                                    580             1           580         .25            145
  Other HCEs                              260             1           260         .25             65

NPDB Matched Response Survey
  Licensing Boards                         55             3           165         .1              16
  Hospitals                               984             3         2,952         .1             295
  MCOs                                    848             3         2,544         .1             254
  Other HCEs                              904             3         2,712         .1             270

HIPDB Matched Response Survey
  Licensing Boards                         43             3           129         .1              12
  Hospitals                               202             3           606         .1              60
  MCOs                                    432             3         1,296         .1             129
  Other HCEs                               87             3           261         .1              26

NPDB Non-User Survey
  Licensing Boards                        213             1           213         .16             34
  MCOs                                    341             1           341         .16             54
  Other HCEs                              881             1           881         .16            141

HIPDB Non-User Survey
  Licensing Boards                         30             1            30         .16              4
  MCOs                                    411             1           411         .16             76
  Other HCEs                              974             1           974         .16            155

Total                                  11,577                      18,687                      2,817




13. Estimate of Annualized Cost Burden To Respondents

There are no capital, start-up, or operation and maintenance costs associated with this data collection for respondents.


14. Estimate of Annualized Cost to the Federal Government


HRSA has contracted with The Gallup Organization for development of the evaluation design and development of instruments, data collection, analysis, and reporting. The total cost for the services of The Gallup Organization will be $1,263,964.


15. Change in Burden


This is a new data collection.


16. Publication of Results of Data Collection, Time Schedules, and Analysis Plans


Publication of Results

The contractor will write a final report based upon the requirements of the contract. The report will contain a background section and scope, design, and methodology sections. The main body of the report will describe and interpret the key findings and will include final analytical tables. The final section of the report will present conclusions and recommendations based on the survey outcomes.


The study report will be available upon request, and the study results will be provided to all institutions that request a copy.


Survey Schedule-User Survey


Activity and timing:

  1. Eligibility screening telephone call into entities: following OMB approval.
  2. Web survey invitation emailed to telephone-screened respondents: 24 hours after each eligibility screening telephone call is completed.
  3. Email reminder #1: 10 days after the respective original email survey invitation is sent.
  4. Email reminder #2: 21 days after the original email survey invitation is sent.
  5. CATI prompt telephone call into entities that have received web survey invitations but have not yet completed the survey: 30 days after the original email survey invitation is sent.
  6. Email reminder #3: 50 days after the original email survey invitation is sent.
  7. CATI prompt #2: 60 days after the original email survey invitation is sent.
  8. First class mail prompts for difficult-to-reach respondents: 65 days after the original email survey invitation is sent.
  9. Email reminder #4: 70 days after the original email survey invitation is sent.
  10. CATI for non-respondents (last chance to complete the survey): 80+ days after the original email survey invitation is sent.
  11. USPS Priority/FedEx prompt for refusals and difficult-to-reach respondents: 80+ days after the original email survey invitation is sent.
  12. Submit a draft final report: June 2008.
  13. Submit revised final report: August 2008.
  14. Submit draft article #1, suitable for publication, discussing the use and usefulness of the NPDB: October 2008.
  15. Submit draft article #2, suitable for publication, discussing the use and usefulness of the HIPDB: October 2008.



Survey Schedule-Non User Survey

A team of trained executive telephone interviewers will contact respondents identified as eligible non-users. During this brief telephone call, interviewers will gain the cooperation of respondents and then transfer them to an automated IVR system. The non-user sample will follow a multi-call design running concurrently with the user phone screening procedure detailed above.



Data Analysis Plan


The primary purposes of the data analysis are:


1. To assess the overall satisfaction of NPDB and HIPDB users with the reporting and querying processes, methods for improving these processes, and user perception of the usefulness of the information in licensing and credentialing decisions.


2. To determine why institutions eligible to use the NPDB and HIPDB did not use the Data Banks, how they believe the processes could be improved, and what the perceived usefulness of the information might be.


As HIPDB querying started in March of 2000, this will be the first thorough analysis done of HIPDB users. Overall, the analysis will center on the goal of making the two databanks more useful, effective, and influential on decisions made by hospitals, managed care organizations and other entities.


Tabulations


The analysis of the data will involve preparing descriptive statistics (e.g., means, medians, frequency distributions, and cross-tabulations) to describe the characteristics of databank clients, their satisfaction, and their usage. All of the closed-ended responses will be reported in tabular format to provide a quick view of the study results and comparisons across entity types. Three sets of tables will be prepared:


  1. User tables – Tables providing entity level data (for both queriers and reporters, for the NPDB and HIPDB) on areas such as general satisfaction, specific areas of satisfaction, usefulness of databank information, and benchmarking information such as time taken to query or produce reports. The analysis of the user tables will enable us to determine how each databank is currently meeting the needs of its users and how each can best meet the needs of its future users.


  2. Match level data tables – Tables providing match level data on areas such as actions taken in response to reports and the completeness and usefulness of information in reports (for users who received a match response from the NPDB and/or HIPDB). The goal of the match response analysis is to learn, in more detail, about the specific impact of a particular match report on the decision making of various entities. Special emphasis will be placed on the impact of the type of information returned on a specific matched response, whether it was new information or a confirmation of existing information, and how it is used in decision making. This analysis is critical to assessing the impact of the respective databanks on the decision making process.


  3. Non-user tables – Tables providing entity level data on the reasons for non-use, including knowledge of the databanks and current methods for licensing and credentialing (for both NPDB and HIPDB non-users). The main purpose of the analysis of non-users is to determine the reasons for non-use, how non-users believe the processes can be improved, and what the perceived usefulness of the information might be.


These tabulations will require proper weighting of the data to produce estimates that reflect users, match responses, and non-users nationwide. In this project, the planned analyses require that four different sets of weights be computed to produce representative statistics for the two different types of users (queriers and reporters), the non-users, and the match responses. We will use the first three types of weights to prepare entity-based estimates for the populations of queriers, reporters, and non-users, such as the percentage of users that were satisfied with the querying or reporting processes, or the percentage of non-users who were aware of the Data Banks. We will use the last type of weight to produce statistics that reflect the population of match responses, such as the percentage of match responses that yielded useful information. Standard errors that reflect the complex sample design will also be estimated for key statistics. We plan to conduct the data analysis using the SUDAAN software package in conjunction with SAS to account for the complex sample design.
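
As a rough illustration of the kind of design-weighted estimate described above (and not the contractor’s actual SUDAAN/SAS programming), the sketch below computes a weighted percentage and an approximate stratified, with-replacement standard error in Python. The strata, weights, and satisfaction indicator shown are hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical querier-level records: stratum (entity type), design weight,
# and a 0/1 indicator for "satisfied with the querying process".
df = pd.DataFrame({
    "stratum":   ["hospital"] * 4 + ["mco"] * 3 + ["licensing_board"] * 3,
    "weight":    [12.0, 12.0, 12.0, 12.0, 8.5, 8.5, 8.5, 4.0, 4.0, 4.0],
    "satisfied": [1, 1, 0, 1, 1, 0, 1, 1, 1, 0],
})

# Weighted estimate of the percentage satisfied.
p_hat = np.average(df["satisfied"], weights=df["weight"])

# Approximate design-based variance for a stratified sample: sum the
# within-stratum variances of the linearized values (with-replacement
# approximation, in the spirit of a Taylor-series linearization).
w_total = df["weight"].sum()
var = 0.0
for _, g in df.groupby("stratum"):
    z = g["weight"] * (g["satisfied"] - p_hat) / w_total   # linearized values
    n_h = len(g)
    var += n_h / (n_h - 1) * ((z - z.mean()) ** 2).sum()

se = var ** 0.5
print(f"Weighted percent satisfied: {100 * p_hat:.1f}%  (SE {100 * se:.1f} points)")
```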


Comparisons



Comparisons will be made across entity types, user groups, and time. Specifically, we will compare:



  1. Inter-Organizational Comparisons: We will compare responses (for both reporters and queriers) to entity-level and match-level questions across the five entity types: medical malpractice payers, State licensing boards, hospitals, managed care organizations, and other health care entities. Univariate comparison methods will be used to make comparisons, including differences-of-means tests (t-tests), chi-square tests, and Wilcoxon rank-sum tests (non-parametric), as needed.

  2. Longitudinal Comparisons: The analysis will include comparisons with three sets of data:


    1. Entity level baseline data collected in 2000 for the NPDB user and non-user surveys (University of Illinois at Chicago and Northwestern University, 2001), and data from the 1994 study conducted by Walcoff and Associates, in the areas of general satisfaction, specific areas of satisfaction, general usefulness of the types of information disclosed, and benchmarking information such as time taken to query or produce reports;


    2. Data collected in the 2002 HRSA HIPDB ACSI customer satisfaction survey, which was the first to examine user satisfaction with the HIPDB; and

    3. Match response level data collected by the Office of Inspector General (OIG) in the February 1993 reports titled “Usefulness and Impact of the NPDB Reports to Hospitals” and “Usefulness and Impact of NPDB Reports to State Licensing Boards” and in the 1995 OIG reports titled “NPDB Reports to Hospitals: Their Usefulness and Impact” and “NPDB Reports to MCOs: Their Usefulness and Impact”, in the areas of timeliness of responses from the NPDB, actions taken based on responses from the NPDB, completeness of the disclosure information, and usefulness of the information. These OIG reports studied the utilization and impact of the NPDB on managed care entities and hospitals during its first four years of operation.


  3. User/Non-user Comparisons: Three sets of analyses will be used to compare users to non-users:



    1. A comparison of survey responses. A limited set of questions is common to both the user and the non-user surveys, including the sources of information used in the credentialing process, contracting with an outside agency, and the importance of “scenario” information in the licensing/credentialing process. Responses to these questions will be compared using the univariate comparison methods described above.



    2. A comparison of organizational/market characteristics. Organizational/market characteristics such as age of entity, size, ownership type (if applicable), geographic region, and urbanicity will be drawn from external databases to compare users and non-users. Because these measures are heterogeneous across entity types, the comparisons will be stratified by entity group.



    3. Predicting users. Using the organizational and market characteristics, we will estimate binary logit or probit models of the probability that an entity is a databank user. This analysis will also be stratified by entity group.



  4. Special Tabulation Comparisons: Other analyses may be conducted on specific variables identified by HRSA prior to the conduct of the survey, such as the level of clinical privilege actions taken as a result of the reports (e.g., 0 = no action; 1 = limited privileges; 2 = limited suspension; 3 = revocation). For these comparisons, we will first present the number of actions in tabular format, including a breakdown by the type and size of entity. Next, an ordered logit or probit model may be used to predict the selected outcome, using explanatory variables such as organizational or market characteristics and the number of queries/matched responses by organization. (A minimal illustrative sketch of the models described in items 3 and 4 follows this list.)
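
Items 3 (Predicting users) and 4 (Special Tabulation Comparisons) above call for binary and ordered logit or probit regressions. The sketch below is a minimal illustration of those kinds of models, not the planned analysis itself: the entity characteristics, the outcome coding, and the simulated data are hypothetical; it ignores the survey design weights; and it assumes the Python statsmodels package rather than the SUDAAN/SAS software named earlier.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 500

# Hypothetical entity-level file: organizational/market characteristics
# (bed size, for-profit ownership, urban location) plus two outcomes.
entities = pd.DataFrame({
    "beds":       rng.integers(25, 600, size=n),
    "for_profit": rng.integers(0, 2, size=n),
    "urban":      rng.integers(0, 2, size=n),
    "is_user":    rng.integers(0, 2, size=n),   # 1 = registered databank user
    "action":     rng.integers(0, 4, size=n),   # 0 = no action ... 3 = revocation
})

# (1) Binary logit predicting the probability that an entity is a databank user.
X = sm.add_constant(entities[["beds", "for_profit", "urban"]].astype(float))
user_model = sm.Logit(entities["is_user"].astype(float), X).fit(disp=False)
print(user_model.summary())

# (2) Ordered logit for the level of clinical privilege action taken.
action = pd.Series(pd.Categorical(entities["action"],
                                  categories=[0, 1, 2, 3], ordered=True))
exog = entities[["beds", "for_profit", "urban"]].astype(float)
privilege_model = OrderedModel(action, exog, distr="logit").fit(method="bfgs",
                                                                disp=False)
print(privilege_model.summary())
```

In the actual analysis, the same model forms would be fit to the survey data and stratified by entity group, as described above.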


Proposed Articles


The final papers will be based primarily on our findings. The plan is to develop at least one paper directed at the health services research literature and at least one paper for a high-profile medical journal. The topics of these papers might include:


  1. Use and usefulness of the NPDB to hospitals,

  2. Use and usefulness of the NPDB to managed care organizations,

  3. Use and usefulness of HIPDB.


Beyond the survey reports, there is very little information on either of the databanks currently available in the medical or health services research literature. And, unfortunately, most of the information that is published in relevant journals is in the form of editorials and commentary. This project can provide strong, science-based studies for publication in the appropriate literature. These articles will raise awareness of the data bank and the importance of its mission.



Contents of the Final Report


The report will be prepared with emphasis on clear and policy-relevant results, and will use graphical presentation techniques as much as possible. We anticipate that the draft findings report will include the following key areas of interest:



  1. Abstract

  2. Executive summary of major findings;

  3. Introduction/Background

  4. Study overview, including:

    1. research questions

    2. brief overview of the study design and data collection activities;

  5. Summary of findings: Querying Users (findings from the querying questionnaire including overall satisfaction scores, satisfaction scores for timeliness, average usefulness score, query preparation time)

    1. NPDB queriers

    2. HIPDB queriers

    3. Comparisons between the NPDB and HIPDB users/queriers (e.g., is the satisfaction with NPDB higher than HIPDB, differences in usefulness of information and their ability to make decisions)

  6. Summary of findings: Reporting Users (findings from the reporting questionnaire including overall average satisfaction scores, report preparation time)

    1. NPDB reporters

    2. HIPDB reporters

    3. Comparisons between the NPDB and HIPDB users/reporters

  7. Summary of findings: Users who Received a Match Response (findings from the match response questionnaire including percent which yielded useful information, new information, information which was influential in decision making)

    1. NPDB users who received a match response

    2. HIPDB users who received a match response

    3. Comparisons between the NPDB and HIPDB match response users

  8. Comparisons between reporters and queriers (difference in satisfaction, are the queriers’ scores higher than reporters’ scores?)

    1. NPDB reporters vs. queriers

    2. HIPDB reporters vs. queriers

  9. Summary of findings: Non-users

    1. NPDB nonusers

    2. HIPDB nonusers

    3. Comparisons between the NPDB and the HIPDB nonusers

  10. Comparisons between users and non-users (including importance of databank reports, sources of information used, use of outside agencies)

    1. NPDB users vs. non-users

    2. HIPDB users vs. non-users

  11. Longitudinal comparisons, 1994-2006 (including report preparation time, query preparation time, matching errors, use of an authorized agent)

  12. Summary and recommendations for improvements;

  13. Recommended areas of future research; and

  14. References




17. Exemption for Display of Expiration Date


The expiration date will be displayed.


18. Certifications


The certifications are included in the package.

