Biomonitoring of Great Lakes Populations
Laboratory Procedures Document
General
The state health department laboratories participate in the Centers for Medicare and Medicaid Services (CMS) approved proficiency testing (PT) program for various test categories including routine chemistry and toxicology (See Attachment 7c. CLIA certificates). In addition, the state laboratories all participate in the Arctic Monitoring and Assessment Program (AMAP) external PT program to verify on a periodic basis that the accuracy and precision of their results are within acceptable limits. Each state laboratory retains documentation of its recent external proficiency testing results (See Attachment 7d. Contact Information for Proficiency Test Reports and Laboratory Standard Operating Procedures).
A policy and procedures manual that will be used when conducting laboratory analysis was issued to all state grantees. This document was adapted from The National Biomonitoring Program, Division of Laboratory Sciences (DLS), National Center for Environmental Health (NCEH), Centers for Disease Control and Prevention. The document outlines the quality assurance (QA) and quality control (QC) procedures to be followed when analyzing participant specimens in order to ensure the quality of the laboratory results. In addition, state laboratories will submit QC charts on a periodic basis to ATSDR. Below is a summary of the laboratory guidelines that were provided to the state health departments.
Specimen Collection, Submission, Handling and Storage
The goal of specimen submission, handling and storage is to optimize the accurate and reliable measurement of analytes of interest. The state laboratories will use standardized protocols for collecting, handling and storing of biological specimens (blood and urine). The blood and urine collection protocols are specific for environmental contaminants and are utilized by the DLS. All blood samples will be drawn by qualified and trained phlebotomists. Urine specimens will be collected in sterile urine cups. State laboratories will follow their existing chain of custody procedures for receipt of biological specimens.
Identification of Specimen
Personal identifiers (e.g., names) will not be included on test specimens. The state laboratories will use internal study ID numbers to identify and track individual samples. All labels on specimens submitted for analysis will be printed in bar-code format with the appropriate study ID number. The master list linking the specimen ID number to the identifying information will be maintained by the PIs at Michigan Department of Community Health (MDCH) and New York State Department of Health (NYSDOH) and will be destroyed at the end of the study period. Since the Minnesota Department of Health is working with a tribal entity, the master list will be maintained outside of ATSDR, and ATSDR will not receive information that links a participant's name to the study ID number. ATSDR will only receive de-identified data.
Quality Control (QC)
In this context, quality control (QC) procedures are for monitoring and evaluating the quality of the analytical testing process of each method to assure the accuracy and reliability of the test results. Thus, QC procedures referenced in this document refer to the analytical phase of testing and do not refer to the pre-analytic (e.g., collection) or post-analytic (data analysis) phases. QC as described below is viewed as one part of the overall QA process.
Method Specific QC Procedures
Specific QC procedures are written for each analytical method that will be performed by each state grantee. These procedures are described in the standard operating procedures (SOPs) that are kept on file by the laboratory supervisors at each state health department (See Attachment 7d for state contact information). Some general aspects of QC procedures are described below.
QC definitions
1) Internal ("bench") quality control - internal quality control (QC) is the evaluation of analytical performance that includes QC samples for which the analyst knows the expected measurement result. Internal quality control materials that are made by weighing or spiking (adding in) a known amount of analyte into the matrix qualify under CLIA as "calibration materials" and may be used in "calibration verification". Internal QC materials may also evaluate particular aspects of method performance, such as a "blank" (or zero concentration) internal QC material.
2) External ("blind") quality control - external quality control is the evaluation of analytical performance that includes QC samples for which the analyst does not know the expected measurement result. The analyst is "blind" to the expected measurement result. A "blank" QC material is not an acceptable external ("blind") QC material, although it is an appropriate internal ("bench") QC material. Standards are acceptable external ("blind") QC specimens providing the analyst does not know the expected result for the standard.
3) Proficiency testing - proficiency testing is one kind of external quality control in which the analytical performance of a method is evaluated using specimens provided on a periodic basis (usually every 3 or 6 months).
4) Analytical run (sometimes referred to just as a run) - a set of samples that are analyzed in a time period within which the measurement system is considered to have stable accuracy and precision. The time period for a run may not exceed 24 hours. An analytical run will consist of both quality control specimens and participant specimens. When an analysis requires multiple steps that may require periods longer than 24 hours (e.g. extensive sample preparation followed by gas chromatography - mass spectrometry analysis), the time period for analysis of a run on the analytical instrumentation (e.g., mass spectrometer) shall not exceed 24 hours.
5) Calibration - defined by CLIA as "the process of testing and adjusting an instrument, kit or test system to provide a known relationship between the measurement response and the value of the substance that is being measured by the test procedure."
6) Calibration material - defined by CLIA as "a solution which has a known amount of pure analyte weighed in, or has a value determined by repetitive testing using a reference or definitive test method". Many standards are therefore “calibration materials.” NIST SRMs and other SRMs qualify as calibration materials.
7) Calibration verification - defined by CLIA as "the assaying of calibration materials in the same manner as patient (participant’s) samples to confirm that the calibration of the instrument, kit, or test system has remained stable throughout the laboratory's reportable range for patient (participant’s) test results."
QC Requirements of Each Analytical Run and Choice of QC Concentrations
Analysis of patient (participant) samples is organized into analytical runs. When analyzing participant specimens, the minimum QC requirement for an analytical run is that the run must include at least two internal ("bench") QC specimens, one of which may be a blank QC specimen. Additional QC specimens may be added to meet the quality needs of a particular method.
Overview of the Relationship between Internal QC, Proficiency Testing and External QC
To be sure each analytical run is in control, the laboratory quality control programs include internal (“bench”) QC in each analytical run.
Internal ("bench") QC
The goal of internal QC is to provide a rapid feedback to the analyst on the performance of the measurement process to be sure analytical results and factors affecting analytical results are within acceptable limits. At least two internal QC specimens must be included in each analytical run, one of which may be a QC blank specimen. The QC material used for an internal QC specimen may be a CLIA calibration material (i.e., "a solution which has a known amount of analyte weighed in or has a value determined by repetitive testing using a reference or definitive test method"), or other material that has levels of the analyte that are useful for monitoring method performance.
This QC specimen must be characterized by at least twenty (20) analytical runs to determine appropriate QC parameters before it is used in the QC process. Results from these twenty (20) runs are used to determine the concentration of the analyte of interest and the precision of the laboratory method, which is required to establish QC limits. Standard Shewhart QC charts are maintained for this internal QC specimen. A separate QC chart is to be maintained for each QC material used for this internal QC specimen. Standard criteria for run rejection based on statistical probabilities are used to declare a run either in control or out-of-control. QC abbreviations used in the rules are as follows:
Si = Standard deviation of individual results
Sm = Standard deviation of the run means
Sw = Within-run standard deviation
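As an illustration only, the following Python sketch estimates the characterization mean, Si, Sm, and Sw from the twenty characterization runs described above, assuming duplicate QC results per run; the simulated values and the data layout are hypothetical, not prescribed.

import numpy as np

# Illustrative only: 20 characterization runs with 2 QC results per run,
# simulated here in place of real characterization data.
rng = np.random.default_rng(0)
runs = rng.normal(loc=5.0, scale=0.25, size=(20, 2))

characterization_mean = runs.mean()                 # target value for the QC chart
s_i = runs.std(ddof=1)                              # Si: SD of individual results
run_means = runs.mean(axis=1)
s_m = run_means.std(ddof=1)                         # Sm: SD of the run means
s_w = np.sqrt(runs.var(axis=1, ddof=1).mean())      # Sw: pooled within-run SD

print(f"mean={characterization_mean:.3f}  Si={s_i:.3f}  Sm={s_m:.3f}  Sw={s_w:.3f}")
print(f"2Sm limits: {characterization_mean - 2*s_m:.3f} to {characterization_mean + 2*s_m:.3f}")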
QC rules will depend on the number of QC pools per run and the number of QC results per pool.
The following sets of rules are provided as guidance to accommodate the range from one QC pool per run through three QC pools per run. For each number of QC pools per run, the rules are divided into categories of one QC result per pool and two or more QC results per pool.
QC rules for: Analytical run with 1 QC pool per run (must also include a blank QC specimen)
One QC pool per run with one QC result per pool:
1) If QC run result is within 2Si limits, then accept the run.
2) If QC run result is outside a 2Si limit - reject run if:
a) Extreme Outlier – Run result is beyond the characterization mean +/- 4Si
b) 1 3S Rule - Run result is outside a 3Si limit
c) 2 2S Rule - Current and previous run results are outside the same 2Si limit
d) 10 X-bar Rule – Current and previous 9 run results are on same side of the characterization mean
e) R 4S Rule – The current and the previous run results differ by more than 4Si. Note: Since runs have a single result per pool and only 1 pool, the R 4S rule is applied across runs only.
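As an illustration of how the single-pool, single-result rules above might be applied, the following Python sketch checks a current QC result against prior run results; the function, its inputs, and the threshold logic are an assumed reading of the rules, not a prescribed implementation.

def evaluate_run(result, history, mean, s_i):
    """Return 'accept' or a rejection reason for a run with one QC result from one pool.

    result  - QC result for the current run
    history - QC results from previous runs, oldest first
    mean    - characterization mean of the QC pool
    s_i     - Si, the standard deviation of individual results
    """
    if abs(result - mean) <= 2 * s_i:
        return "accept"                                              # within 2Si limits
    if abs(result - mean) > 4 * s_i:
        return "reject: extreme outlier (beyond mean +/- 4Si)"
    if abs(result - mean) > 3 * s_i:
        return "reject: 1-3S rule (outside 3Si limit)"
    prev = history[-1] if history else None
    if prev is not None:
        same_side = ((result - mean > 2 * s_i and prev - mean > 2 * s_i) or
                     (mean - result > 2 * s_i and mean - prev > 2 * s_i))
        if same_side:
            return "reject: 2-2S rule (current and previous run outside same 2Si limit)"
        if abs(result - prev) > 4 * s_i:
            return "reject: R-4S rule (current and previous run differ by more than 4Si)"
    last_ten = list(history[-9:]) + [result]
    if len(last_ten) == 10 and (all(r > mean for r in last_ten) or all(r < mean for r in last_ten)):
        return "reject: 10 X-bar rule (10 consecutive runs on same side of the mean)"
    return "accept"                                                  # outside 2Si, no rejection rule met

print(evaluate_run(5.6, [5.1, 5.2, 4.9], mean=5.0, s_i=0.25))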
One QC pool per run with two or more QC results per pool:
1) If QC run mean is within 2Sm limits and individual results are within 2Si limits, then accept the run.
2) If QC run mean is outside a 2Sm limit - reject run if:
a) Extreme Outlier – Run mean is beyond the characterization mean +/- 4Sm
b) 3S Rule - Run mean is outside a 3Sm limit
c) 2 2S Rule – Current and previous run means are outside the same 2Sm limit
d) 10 X-bar Rule – Current and previous 9 run means are on same side of the characterization mean
3) If one of the two QC individual results is outside a 2Si limit - reject run if:
a) R 4S Rule – Within-run range for the current run and the previous run exceeds 4Sw (i.e., 95% range limit)
QC rules for: Analytical run with 2 QC pools per run
Two QC pools per run with one QC result per pool
1) If both QC run results are within 2Si limits, then accept the run.
2) If 1 of the 2 QC run results is outside a 2Si limit - reject run if:
a) Extreme Outlier – Run result is beyond the characterization mean +/- 4Si
b) 3S Rule - Run result is outside a 3Si limit
c) 2S Rule - Both run results are outside the same 2Si limit
d) 10 X-bar Rule – Current and previous 9 run results are on same side of the characterization mean
e) R 4S Rule – Two consecutive standardized run results differ by more than 4Si. Note: Since runs have a single result per pool for 2 pools, comparison of results for the R 4S rule will be with the previous result within run or the last result of the previous run. Standardized results are used because different pools have different means.
Two QC pools per run with two or more QC results per pool
1) If both QC run means are within 2Sm limits and individual results are within 2Si limits, then accept the run.
2) If 1 of the 2 QC run means is outside a 2Sm limit - reject run if:
a) Extreme Outlier – Run mean is beyond the characterization mean +/- 4Sm
b) 3S Rule - Run mean is outside a 3Sm limit
c) 2S Rule - Both run means are outside the same 2Sm limit
d) 10 X-bar Rule – Current and previous 9 run means are on same side of the characterization mean
3) If one of the 4 QC individual results is outside a 2Si limit - reject run if:
a) R 4S Rule – Within-run ranges for all pools in the same run exceed 4Sw (i.e., 95% range limit). Note: Since runs have multiple results per pool for 2 pools, the R 4S rule is applied within runs only.
QC rules for: Analytical run with 3 QC pools per run
Three QC pools per run with one QC result per pool
1) If all 3 QC run results are within 2Si limits, then accept the run.
2) If 1 of the 3 QC run results is outside a 2Si limit - reject run if:
a) Extreme Outlier – Run result is beyond the characterization mean +/- 4Si
b) 3S Rule - Run result is outside a 3Si limit
c) 2S Rule - 2 or more of the 3 run results are outside the same 2Si limit
d) 10 X-bar Rule – Current and previous 9 run results are on same side of the characterization mean
e) R 4S Rule – Two consecutive standardized run results differ by more than 4Si. Note: Since runs have a single result per pool for 3 pools, comparison of results for the R 4S rule will be with the previous result within the current run or with the last result of the previous run. Standardized results are used because different pools have different means.
Three QC pools per run with two or more QC results per pool
1) If all 3 QC run means are within 2Sm limits and individual results are within 2Si limits, then accept the run.
2) If 1 of the 3 QC run means is outside a 2Sm limit - reject run if:
a) Extreme Outlier – Run mean is beyond the characterization mean +/- 4Sm
b) 3S Rule - Run mean is outside a 3Sm limit
c) 2S Rule - 2 or more of the 3 run means are outside the same 2Sm limit
d) 10 X-bar Rule – Current and previous 9 run means are on same side of the characterization mean
3) If one of the QC individual results is outside a 2Si limit - reject run if:
a) R 4S Rule - 2 or more of the within-run ranges in the same run exceed 4Sw (i.e., 95% range limit). Note: Since runs have multiple results per pool for 3 pools, the R 4S rule is applied within runs only.
No results for a given analyte are to be reported from an analytical run that has been declared out-of-control for that analyte as assessed by internal ("bench") QC. Other method-dependent quality criteria that apply to individual specimens (e.g., quantity not sufficient, inadequate quality of specimen, signal to noise ratio greater than 3.0 or post-extraction or post-chromatography analyte recovery greater than 60%) should also be used to determine whether to accept or reject an analytical result. For each method, quality control results and remedial actions for out-of-control conditions are to be documented in a QC Manual.
Proficiency Testing
Proficiency testing (PT) evaluates the quality of the measurement process on a periodic basis, usually quarterly or semi-annually. PT samples are to be handled and analyzed the same as participant samples.
Calibration Verification
Calibration verification is defined by CLIA as "the assaying of calibration materials in the same manner as patient (participant) samples to confirm that the calibration of the instrument, kit, or test system has remained stable throughout the laboratory's reportable range for patient test results". Thus calibration verification is to assure that the accuracy of the measurement process across the reportable range is maintained over time.
Calibration verification is to be performed by analyzing calibration materials that represent the lowest end, the middle portion and highest end of the reportable range. CLIA defines calibration materials as "a solution which has a known amount of analyte weighed in or has a value determined by repetitive testing using a reference or definitive test method". Analytical standards usually satisfy the CLIA definition of calibration material. NIST standards qualify as calibration materials.
The low and high calibration materials must bracket the reportable range; i.e., results should not be reported as a numerical value unless they fall between the low and high calibration materials. If a result exceeds the upper end of the reportable range and the method permits sample dilution, the sample may be diluted to bring it within the reportable range. If dilution was necessary, the reported value will exceed the upper end of the reportable range. The minimum frequency of calibration verification is once every six months. Calibration verification will be performed after any change in the analytical procedure which is likely to make a non-trivial difference in sample results (e.g., complete change of reagents, change of GC column). The process of calibration verification should take no more than three analytical runs.
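As a simple illustration of this check, the Python sketch below verifies that low, mid, and high calibration materials recover within an assumed tolerance; the function name, target values, and ±10% tolerance are hypothetical, since acceptance criteria are method-specific.

def calibration_verified(targets, measured, tolerance=0.10):
    """True if every calibration material recovers within the assumed tolerance."""
    return all(abs(m - t) <= tolerance * t for t, m in zip(targets, measured))

targets  = [1.0, 50.0, 200.0]    # low, mid, high ends of the reportable range (illustrative units)
measured = [1.04, 49.1, 207.0]
print("calibration verified" if calibration_verified(targets, measured)
      else "calibration verification failed - remedial action required")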
Changing QC Materials
When a QC material is changed, the new material must be re-evaluated as described above for a new QC material in order to estimate the concentration of the target analyte and appropriate QC limits. If the QC material is from a manufacturer (e.g., manufacturer-supplied QC materials with RIA or EIA kits, or clinical analyzers), then each new lot should be considered a new QC material and should be characterized by the laboratory before its use – even though the manufacturer’s stated target value may be very close to that of the previous lot of material. Evaluation of new lots of materials should always be overlapped with use of previous lots of materials.
Reference Materials used for Preparation of Calibrators and Quality Control Materials
For all measurements, best available reference materials should be used to assure accuracy. Reference materials available from NIST or other recognized national and international scientific organizations should be used if available. Concentrations of other materials that are to be used for calibrators or quality control materials should be checked against best available reference materials from NIST or other recognized national and international scientific organizations.
For many of the proposed analyses, reference materials, including isotopically labeled internal standards, are synthesized by outside contractors and these materials are used to prepare calibrators and quality control materials. For these custom synthesized or custom prepared reference materials, the following purity checks should be completed by either the outside contracting laboratory or by the state laboratory. Documentation of these purity checks is to be maintained for each material.
For small molecules (typically < 1000 daltons), required purity checks are:
1) proton nuclear magnetic resonance (NMR) spectrum
2) carbon-13 NMR spectrum
3) mass spectrometry full scan
4) mass spectrometry selected ion monitoring scan
5) one of the following three quantitative assays:
a. elemental analysis
b. ultraviolet (UV) analysis with a reliable molar extinction coefficient
c. differential scanning calorimetry
For elements and radionuclides, required purity checks are:
1) elemental analysis
For peptides, required purity checks are:
1) mass spectrometry full scan by MALDI TOF mass spectrometry or LC-ESI mass spectrometry
2) isotope dilution amino acid analysis (AAA) using NIST certified amino acid standards
For proteins, required purity checks are:
1) enzymatic digestion followed by peptide quantitative analysis by isotope dilution LC/MS/MS. Digestion conditions should be used that assure completeness of enzymatic digestion. To quantify peptides resulting from enzymatic digestion, peptide standards should be used which have concentrations established by isotope dilution amino acid analysis.
2) a label-free approach to quantify proteins is adequate for certain uses.
Preparation of Calibrators and Quality Control Materials
Calibrators and quality control materials are prepared by diluting reference materials into an appropriate matrix. The most accurate and precise dilution method with the lowest uncertainty should be used. Both gravimetric and volumetric dilution methods may be used to create calibrators and quality control materials. Each preparation step should be individually considered to minimize bias and random error. For example, viscous liquids may require the use of positive displacement pipettes to reduce bias that has been observed with air displacement pipettes. Two aspects of preparation of calibrators and quality control materials merit special attention to minimize error. First, use of serial dilutions should be minimized. Every attempt should be made to produce calibrators from reference materials using no more than two serial dilutions. Three serial dilutions is the upper allowable limit; if more than three serial dilutions are needed, approval from the laboratory supervisor is required. Second, small volume pipetting should be avoided. In general, volumes < 50 μL should not be pipetted in preparation steps. For unusual circumstances where such small volumes may be needed, laboratory management approval is required and special care should be taken to assure that pipettes are well calibrated for the specific volume to be used. Calibrator preparation steps that include pipetting of volumes less than 10 μL are discouraged and require appropriate approval. Pipettes used for these smaller volumes should be calibrated every 6 months and calibrated at or near the volumes that are used in the analytical methods.
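As a simple illustration of how serial dilution steps compound, the Python sketch below computes a working calibrator concentration from two dilutions of a stock reference material; the stock concentration and volumes are hypothetical and were chosen to respect the ≥ 50 μL pipetting guidance above.

def dilute(conc, aliquot_ul, final_volume_ul):
    """Concentration after diluting an aliquot to the stated final volume."""
    return conc * aliquot_ul / final_volume_ul

stock_conc_ng_per_ml = 10_000.0                                                  # hypothetical reference stock
step1 = dilute(stock_conc_ng_per_ml, aliquot_ul=100, final_volume_ul=10_000)    # 1:100 dilution
step2 = dilute(step1, aliquot_ul=500, final_volume_ul=5_000)                     # 1:10 dilution
print(f"working calibrator after two serial dilutions: {step2:.1f} ng/mL")       # 10.0 ng/mL overall (1:1000)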
Procedure when Reference and Control Materials are Not Available
When reference and control materials are not available, then appropriate controls will be made by spiking a known quantity of the target analyte into an appropriate matrix (e.g., blood, serum, or urine). Every effort should be made to obtain highest purity materials for spiking the appropriate matrix. The QC material is to be characterized by at least twenty (20) analytical runs to adequately estimate the concentration of the target analyte and the QC limits. It may then be used as a quality control material for either internal or external QC.
Qualitative tests
Positive and negative controls are included in each run of patient specimens when the analytical procedure is qualitative. The positive specimen must be measured as positive and the negative specimen as negative for the run to be in-control.
Quality Control Records and QC Manual
Records of all quality control results will be maintained for at least two years after the end of the study period. A QC Manual is to be maintained for each assay which documents quality control results, out-of-control conditions and remedial actions taken to correct out-of-control conditions. The QC Manual also includes results of Proficiency Testing and remedial actions taken to correct unacceptable performance in PT.
Laboratory Notebook
A laboratory notebook with bound and numbered pages is to be maintained that records laboratory procedures and steps followed for preparation of calibration standards, preparation of QC pools, screening of reagents (if appropriate), method development experiments, method validation experiments, and other laboratory activities that reflect the work of the laboratorian day-to-day in developing or improving methods or running analytical methods.
Instrument Log Book
Information associated with installation, configuration, maintenance, repairs, consumables, and usage of laboratory instruments and equipment is to be kept in an Instrument Log Book developed and maintained specifically for each instrument or piece of equipment. The format for the Log Book can be specified by the laboratorian in a manner most conducive to record these instrument operation parameters. The Instrument Log Book and QC Manual are distinct from each other in that the Instrument Log Book contains information uniquely associated with the specific instrument or piece of equipment, whereas the QC Manual contains the information for the analytical method. The Instrument Log Book is instrument-centric whereas the QC Manual is analytical method-centric.
Test Methods, Equipment, Reagents, Supplies, and Facilities
For each analytical procedure, analysts must use equipment, reagents, materials and supplies that are appropriate for achieving acceptable accuracy, precision, analytical sensitivity and analytical specificity. The documentation of individual analytical procedures included in the Analytical Procedures Manual (APM) specifies acceptable equipment, reagents, materials and supplies. If special requirements concerning water quality, temperature, humidity, electrical power or other conditions are required for acceptable method performance, then these must be described for each procedure in the APM. Special procedures to monitor these requirements are also to be included in the documentation of the individual procedure. The lack of specification of such requirements in the documentation of the analytical procedure means that the expected day-to-day variation in any of these parameters is acceptable for proper method performance. Documentation of problems with conditions required (e.g., water quality) for acceptable method performance is to be maintained in the log book of the primary instrument used in the analysis. This documentation must include remedial action.
Labeling of reagents, solutions, supplies
Reagents, solutions, and other supplies must be labeled to indicate the identity of contents, the concentration (if significant), the recommended storage requirements, the preparation and expiration date, and any other pertinent information required for proper use. Reagents, solutions and other supplies are not to be used when they have exceeded their expiration date. If a manufacturer's kit is used for an assay, components of reagent kits of different lot numbers are not to be interchanged unless the manufacturer specifies that this is acceptable analytical practice.
Facilities and Safety
Laboratories should be arranged to ensure that adequate space, ventilation and utilities are available for all phases of testing: pre-analytic, analytic and post-analytic, and that safety concerns are properly addressed at all times.
Analytical Procedure Manual and Method Performance Specifications
All procedures performed in the laboratory on human specimens are documented in an Analytical Procedure Manual (APM). Individual procedures will also be available at or near the bench site where the procedure is performed.
Contents of the Analytical Procedure Documentation
Each analytical procedure must include, when applicable:
1) requirements for specimen collection and processing, including criteria for specimen rejection
2) step-by-step performance of the procedure, including test calculations and interpretation of results
3) preparation of reagents, calibrators, controls, solutions and other materials used in testing
4) calibration and calibration verification procedures
5) the reportable range for participant test results
6) quality control procedures, including PT materials and programs/procedures used
7) remedial action to be taken when calibration or control results are outside acceptable limits
8) limitation in methods, including interfering substances
9) reference range (normal values)
10) life-threatening or "panic values"
11) pertinent literature references
12) specimen storage criteria
13) protocol for reporting panic values
14) course of action if test system becomes inoperable
15) criteria for referral of specimens (usually not needed)
16) safety considerations for performing the method
Approval and Record Maintenance
Each newly-validated and completed procedure must be reviewed and approved by the laboratory supervisor with a dated signature. Each significant change in a procedure must be approved, signed and dated by the laboratory supervisor. The procedure must include the dates of initial use and discontinuance, if discontinued. The procedure documentation is to be moved to the “Archived Methods” notebooks and maintained for at least two years after discontinuance of the method.
Method Performance Specifications
Method performance specifications for each analytical method must be established or verified:
1) Accuracy - accuracy of each analytical method will be determined by analysis of QC reference materials as described in the quality control section.
2) Precision - precision of each analytical method is determined by analysis of QC reference materials as described in the section of this manual on quality control.
3) Analytical sensitivity – if applicable to the method, the limit of detection is included in the documentation of the analytical method. The limit of detection (LOD) of an analytical method is determined according to procedures described in the Limit of Detection section of this document. An excellent resource on the interpretation of limit of detection and limit of quantitation is Quality Assurance of Chemical Measurements by J.K. Taylor, Lewis Publishers, Chelsea, Michigan, 1987.
4) Analytical specificity – defined as measuring the correct component, analytical specificity is to be determined for each method, including the effects of potential interfering substances. This may be verified by testing the effect of potential interfering substances during method development, by analyzing reference materials, by comparing results on split samples with a method considered more definitive, and/or by analyzing samples from persons (n>50) and examining the measurement output (e.g., the GC tracing) for interferences. The appropriate procedures for verifying analytical specificity will vary by analytical method (e.g., isotope-dilution high resolution gas chromatography-high resolution mass spectrometry would require less evaluation than capillary gas chromatography with electron capture detection). Substances which interfere with the analysis in the reportable range are to be listed in the method procedure.
5) Reportable range of test results - the reportable range of test results is to be described in the documentation of the analytical procedure.
6) Reference range (normal range) - if available, the reference range of test results is to be described in the documentation of the analytical procedure and on the test report. The test report is to include the literature references from which the reference range was determined.
7) Other pertinent performance specifications - other performance specifications which are required for adequate method performance are to be specified in the documentation of the analytical procedure.
Equipment Maintenance and Function Checks
Laboratory equipment should be checked regularly to assure acceptable performance. Maintenance (including preventive maintenance) and function checks are to be documented in an equipment log found at or near the piece of equipment. The frequency of maintenance and function checks should follow manufacturer's recommendations, when available. Manufacturer's recommendations must be included in the equipment log. Each analytical procedure outlines the equipment maintenance and function checks for proper method performance and acceptable results from the checks. These checks must be made at the interval specified in the documentation of the procedure. Maintenance and function checks are to be documented in the equipment log. Failure of a function check is to be documented in the equipment log, along with remedial action.
Refrigeration Equipment
Low temperatures are required for storage and preservation of reagents and sample material (analytical specimens), as well as quality control and reference materials. Low temperature storage units with strict temperature requirements should initially be calibrated with calibrated thermometers and checked annually if the temperature measurement is used for QC purposes.
The acceptable temperature range for refrigerators used for storage of CLIA-regulated specimens and reagents is 4-8 °C; temperatures are checked and recorded at least twice weekly. If temperatures are recorded automatically on a continuous monitoring system graph, the individual checking temperatures initials the QC form to indicate that the recording chart was examined. Freezer temperatures are to be monitored weekly (or on the day of use if accessed less frequently than weekly) by visual inspection of either a thermometer or the solidity of indicator material(s) and recorded. The Laboratory Supervisor determines the range of allowable temperature fluctuation. If this range is exceeded, an on-duty monitor contacts the designated contact person as indicated in writing and posted on each unit. Three contact persons should be designated. Freezers and refrigerators should be regularly monitored for excessive ice deposits, inoperative cooling fans, and frayed or worn electrical power connections. Problems should be reported to the Laboratory Supervisor for action.
Calibration and Calibration Verification
Calibration and calibration verification are specified in the documentation of each analytical procedure in the Analytical Procedures Manual (APM). General discussion of calibration verification is provided in the section of this manual on quality control. For all methods that are of high complexity, calibration equivalent to CLIA calibration verification will be performed each day that the method is run. Calibration curves that use CLIA calibration materials and span the reportable range are a routine part of a day’s analyses for these methods. QC is performed in every analytical run to verify that calibration is within acceptable limits.
Comparison of methods performed on multiple instruments or at multiple sites
At least once every six months, a set of at least five samples spanning the reportable range of the analyte(s) of interest is run on both instruments. If the reportable range differs between the two instruments, they shall be compared in a concentration range that is included in the reportable range of each instrument. The Pearson correlation coefficient of the compared results should be greater than 0.95; if not, appropriate remedial action should be taken. In special situations, the laboratory PI may give written approval that the methods are sufficiently similar for the intended use of the data.
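A minimal Python sketch of this comparison check is shown below; the instrument results are illustrative, and scipy is assumed to be available.

from scipy.stats import pearsonr

instrument_a = [2.1, 10.4, 48.9, 102.0, 195.0]   # five samples spanning the reportable range
instrument_b = [2.0, 10.9, 47.5, 104.2, 198.3]

r, _ = pearsonr(instrument_a, instrument_b)
print(f"Pearson r = {r:.4f}")
if r <= 0.95:
    print("r does not exceed 0.95: remedial action (or documented PI approval) required")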
Calibration curve: placement of calibrators, calculation of slope and intercept
A minimum of five calibrators should be used for a calibration curve. They should be spaced throughout the desired measurement range, avoiding an extreme calibrator that is an unusually large distance from the adjacent calibrator, especially on the high end of the calibration curve. In unweighted regression analysis, extreme calibrators can unduly influence the slope estimate of the calibration curve. Calibrators should bracket the measurement range. Samples that exceed the high calibrator can be diluted to bring them within the measurement range.
If concentrations at or slightly above the LOD are to be reported, then a calibrator must be at or slightly above the LOD. Results below the lowest calibrator are not to be reported. If a result is below the lowest calibrator and higher than the LOD, then the result should be reported as below the measurement range for the method and higher than the LOD, but no number should be provided. In unusual circumstances, estimates of results below the LOD can be provided on request to avoid errors associated with multiple imputation of values less than the LOD in multiple regression analysis. The slope and intercept of the calibration curve should be calculated using linear regression.
If calibrators are evenly spaced, unweighted regression is appropriate, but 1/x weighting is also acceptable. If results are likely to often fall at the low end of the measurement range, then 1/x weighting is recommended. Using 1/x weighting, the relative influence of higher calibrators on the slope estimate is diminished. The independent variable (x-axis variable) in the regression is concentration of the calibrators. Log of calibrator concentration is an acceptable independent variable for regression, when the calibration curve spans several orders of magnitude.
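As a sketch of a 1/x-weighted linear calibration fit, the Python code below uses numpy's polyfit with square-root weights so that the squared residuals receive 1/x weight; the calibrator concentrations, responses, and the unknown's response are illustrative.

import numpy as np

conc     = np.array([1.0, 5.0, 10.0, 50.0, 100.0])   # calibrator concentrations (x)
response = np.array([0.11, 0.52, 1.03, 4.95, 10.2])  # instrument response (y)

# np.polyfit applies w to the residuals before squaring, so sqrt(1/x) gives 1/x weighting.
slope, intercept = np.polyfit(conc, response, deg=1, w=np.sqrt(1.0 / conc))
print(f"slope={slope:.4f}  intercept={intercept:.4f}")

unknown_response = 2.4
print(f"back-calculated concentration = {(unknown_response - intercept) / slope:.2f}")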
Calibration curve: assessment of linearity over the measurement range
In the method validation, linearity of the measurement range (e.g., from lowest to highest calibrator) should be verified by visual examination of a residual regression plot (assuring no curvilinear shape of residuals) and by an R2 that exceeds 0.98. A calibration curve with an R2 between 0.95 and 0.98 may be used with approval of the Laboratory Supervisor, who must assure that the accuracy is adequate for the intended use of the measurement. In addition, quadratic and cubic polynomial regressions should be run for calibration curves during method validation. The coefficients of the squared and cubic terms in each of the two polynomial regressions should not be statistically significant (p > 0.05). If the coefficient of the squared or cubic term is statistically significant, evidence for non-linearity exists and a statistician should be consulted. A slightly curvilinear calibration curve may be used for analytical measurement. These curves should not have horizontally flat regions or sharp vertical rises, since these regions would produce a large concentration change for a small measurement change (e.g., change in native-to-label ratio) or a small concentration change for a large measurement change. Both of these conditions lead to relatively large error in concentration estimates. A calibration curve that is slightly curvilinear needs to be approved by the Laboratory Supervisor for use.
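The linearity checks described above could be carried out along the lines of the following Python sketch; the calibration data are illustrative, and statsmodels is assumed to be available for the coefficient p-values.

import numpy as np
import statsmodels.api as sm

conc     = np.array([1.0, 5.0, 10.0, 25.0, 50.0, 100.0])
response = np.array([0.10, 0.51, 1.00, 2.52, 5.05, 9.98])

linear = sm.OLS(response, sm.add_constant(conc)).fit()
print(f"linear R^2 = {linear.rsquared:.4f}")                    # should exceed 0.98

for degree in (2, 3):
    X = sm.add_constant(np.column_stack([conc ** d for d in range(1, degree + 1)]))
    fit = sm.OLS(response, X).fit()
    p_highest = fit.pvalues[-1]                                 # p-value of the highest-order term
    flag = "  (evidence of non-linearity - consult a statistician)" if p_highest < 0.05 else ""
    print(f"degree {degree}: p-value of x^{degree} term = {p_highest:.3f}{flag}")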
Calibration curve: use of matrix based and non-matrix based calibrators
Calibrators should be in the same matrix as unknown samples to be analyzed. In some situations, residual amounts of the target analyte are present in serum, blood, urine or other matrix. For example, this situation can occur when an environmental chemical is found in a small amount in serum or urine (e.g. a pesticide in urine). Spiking serum (or urine) with a known amount of analyte to make a calibrator will result in a calibrator whose concentration is the spike amount plus the residual amount of the chemical already present. Consequently, the true amount of analyte in the calibrator is not known. Use of an alternate matrix with no residual amounts of the analyte of interest can resolve this problem. If available, all methods for organic analytes will use isotope dilution mass spectrometry with an isotopically labeled internal standard. For such methods, use of a matrix for calibrators that is not the same as for unknowns should theoretically not be a problem. In such a case, any matrix effect on the analyte should be the same as for the isotopically labeled internal standard and since the ratio of the two is used for calibration, a potential matrix effect should cancel out. Nonetheless, the equivalency of results in an alternate matrix (e.g., phosphate buffered saline) needs to be demonstrated as part of the method validation.
The following process should be used to demonstrate matrix equivalency:
1) Prepare 15 samples in duplicate (total of 30 samples) at concentrations that span the measurement range. Samples should be in the same matrix as used for unknowns.
2) Analyze the 30 samples to obtain the ratio of the native (the target analyte) to its labeled internal standard for each sample.
3) Calculate concentration using a calibration curve that is constructed with calibrators in the same matrix as used for unknowns (e.g., serum, urine, blood). Refer to this method as Method A.
4) Calculate concentration using a calibration curve that is constructed with calibrators in an alternate matrix (e.g., phosphate buffered saline, synthetic urine). Refer to this method as Method B.
5) Let diff = Method A conc – Method B conc; let conc = ([Method A conc + Method B conc]/2)
6) Plot diff vs. conc. This plot has several names: a bias plot, difference plot, or Bland-Altman plot.
7) Visually examine the plot for nonlinearity. Consult statistician if the plot appears nonlinear.
8) Regress diff vs. conc. The slope should be near zero. Examine the p-value for the slope.
a. If the p-value for the slope is > 0.05, then the difference in slope between the two calibration curves is not statistically significant and the alternate matrix can be used.
b. If the p-value is < 0.05, then the laboratory PI must confirm that the absolute magnitude of the slope difference is not consequential in terms of the intended use of the analytical results. Otherwise, the alternate matrix cannot be used for calibrators.
Note: the 30 samples need to be run on the instrument only once; data processing calculations use the matrix-based calibration curve for Method A and the alternate-matrix calibration curve for Method B.
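Steps 5 through 8 might be carried out as in the Python sketch below; the simulated concentrations are purely illustrative, and scipy's linregress supplies the slope and its p-value.

import numpy as np
from scipy import stats

# Illustrative stand-ins for the 30 calculated concentrations (15 levels in duplicate)
rng = np.random.default_rng(1)
true_conc = np.repeat(np.linspace(1, 100, 15), 2)
method_a = true_conc * rng.normal(1.00, 0.03, true_conc.size)   # matrix-based calibration curve
method_b = true_conc * rng.normal(1.01, 0.03, true_conc.size)   # alternate-matrix calibration curve

diff = method_a - method_b                                       # step 5
mean_conc = (method_a + method_b) / 2.0

fit = stats.linregress(mean_conc, diff)                          # step 8: regress diff on conc
print(f"slope = {fit.slope:.4f}, p = {fit.pvalue:.3f}")
if fit.pvalue > 0.05:
    print("slope not significantly different from zero: alternate matrix may be used")
else:
    print("significant slope: laboratory PI must judge whether the bias is consequential")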
Comparison of Two Analytical Methods Measuring the Same Analytes
As new measurement techniques become available and improvements on current methods advance, new methods need to be compared with old ones to assess comparability of accuracy, precision, and sensitivity. Comparison of two analytical methods measuring the same analytes requires statistical evaluation of accuracy and precision (including limit of detection) primarily based on analysis of split samples and quality control samples. The specificity of a method is to be separately verified for each method and is not directly addressed under these guidelines for method comparison. Bias or concentration-associated trends are best detected by split sample analysis using correlation and difference plots with regression analysis. Split sample analysis provides some information on precision. Precision is primarily estimated by repeat analysis of samples from at least two QC pools that span the measurement range.
Split sample analysis
Split sample analysis means that one aliquot is drawn from the sample to begin method A and a separate aliquot is drawn from the same sample to begin method B. In this discussion, method B is considered the new or most recent method. The most important process to compare method A with method B is to perform split analyses on at least 30 specimens which span the range of levels of interest in the application of the method. Fifty samples is preferable to 30, and 100 – 300 samples should be used if relatively small changes in the method accuracy or precision have potentially high public health impact. Using more than 300 samples affords little additional benefit. Results of the split analyses should be statistically analyzed using a correlation plot with regression analysis and a difference plot with regression analysis. If the range of levels covered by the method is very large (e.g., two or three orders of magnitude), or the measurement error of the method increases with increasing concentration, a transformation of the data should be considered before carrying out the correlation and difference plot analyses. Consult a statistician unless you are highly confident using transformations. Log transformation is a commonly useful transformation.
Correlation plot and regression analysis
a) Construct a plot with method B on the y-axis and method A on the x-axis.
b) Examine for outliers and investigate outlier results. Exclude true outliers. If the overall sample size dips below 30 from outlier exclusion, repeat analysis of these outlier samples to raise sample size to at least 30.
c) Visually examine the plot for linearity and calculate R2 from ordinary least squares regression or error-in-both-variables regression (e.g., Deming regression). If weighted least squares regression is used with the measurement calibration curve to help adjust for variance changes across concentration, it may be used in the regression analysis of the correlation plot and the difference plot.
d) There are no absolute criteria for an acceptable R2 value; but, in general, values < 0.95 should be examined for potential differences between methods. If R2 is < 0.95, then a statistician should be consulted to test for influential observations and whether unusually high random error in method A and/or method B is influencing the correlation. R2 estimates between 0.90 and 0.95 require review by the laboratory supervisor to confirm that the method agreement is acceptable for intended use. R2 estimates < 0.90 require approval by the laboratory director.
e) Verify that the slope is not statistically different from 1 (i.e., p ≥ 0.05) and the intercept is not statistically different from 0 (i.e., p ≥ 0.05). (If the 95% confidence interval of the slope includes 1, then it is not statistically different from 1 (i.e., p ≥ 0.05).) If either of these criteria is not met, then the laboratory supervisor must confirm in writing that the lack of agreement is small enough not to be important for the intended use of the measurement.
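A Python sketch of steps c through e using ordinary least squares is shown below; the split-sample values are illustrative (far fewer than the 30 or more specimens called for above), and statsmodels' default 95% confidence intervals are used to judge the slope and intercept criteria.

import numpy as np
import statsmodels.api as sm

method_a = np.array([1.2, 3.4, 5.1, 9.8, 15.2, 22.0, 30.5, 41.0, 55.3, 70.1])
method_b = np.array([1.1, 3.6, 5.0, 10.1, 15.0, 22.8, 30.0, 42.2, 54.8, 71.5])

fit = sm.OLS(method_b, sm.add_constant(method_a)).fit()
print(f"R^2 = {fit.rsquared:.4f}")

(int_lo, int_hi), (slope_lo, slope_hi) = fit.conf_int()          # 95% confidence intervals
slope_ok = slope_lo <= 1 <= slope_hi
int_ok = int_lo <= 0 <= int_hi
print(f"slope 95% CI: {slope_lo:.3f} to {slope_hi:.3f}" + ("  (includes 1)" if slope_ok else ""))
print(f"intercept 95% CI: {int_lo:.3f} to {int_hi:.3f}" + ("  (includes 0)" if int_ok else ""))
if not (slope_ok and int_ok):
    print("criteria not met: written confirmation from the laboratory supervisor is required")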
Difference plot and regression analysis
a) Construct a difference plot with (method B – method A) on the y-axis and the average of method A and method B (i.e., (method A + method B)/2) on the x-axis.
b) Visually verify that variation in the y-axis [i.e., (method B – method A)] values is approximately uniform across the range of y-axis values. If not, consult a statistician for use of a different y variable, such as (method B/method A) or log (method B/method A).
c) Regress the y variable (e.g., method B – method A) on the x variable (e.g., average of method A and method B). Verify that the slope is not statistically different from 0 (i.e., p ≥ 0.05) and the intercept is not statistically different from 0 (i.e., p ≥ 0.05).
d) If the slope or intercept is statistically significant (p < 0.05), then a Laboratory Supervisor must verify in writing that the magnitude of the slope combined with the best estimate of the intercept is not consequential in terms of the intended use of the measurement (i.e., too small to make a meaningful difference in application of the measurement).
Analysis of quality control (QC) specimens
For method B (the new method), each QC pool should be analyzed at least 20 times (preferably 30 times) in separate runs to establish new QC limits. For each QC pool, the variance (or coefficient of variation (CV)) of the QC analysis for method B should be compared to the variance (or CV) for method A. A statistician may be consulted to determine whether the difference in magnitude of variation is statistically significant. If the variance of method B is statistically greater (p < 0.05) than that of method A, the laboratory supervisor must approve that the variance is still acceptable for intended use of the data.
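The variance comparison might be done with a one-sided F-test, as in the Python sketch below; the QC results are illustrative, and in practice each pool would have at least 20 values per method.

import numpy as np
from scipy import stats

qc_method_a = np.array([10.1, 9.8, 10.3, 9.9, 10.0, 10.2, 9.7, 10.4, 10.1, 9.9])
qc_method_b = np.array([10.0, 10.5, 9.4, 10.6, 9.5, 10.3, 9.6, 10.7, 9.8, 10.2])

f_stat = qc_method_b.var(ddof=1) / qc_method_a.var(ddof=1)
df_b, df_a = qc_method_b.size - 1, qc_method_a.size - 1
p_value = stats.f.sf(f_stat, df_b, df_a)     # one-sided: is method B's variance greater than method A's?
print(f"F = {f_stat:.2f}, one-sided p = {p_value:.3f}")
if p_value < 0.05:
    print("method B is significantly less precise: laboratory supervisor approval required")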
Proficiency testing
An additional verification of the accuracy of method B is acceptable performance on proficiency testing materials received from outside the laboratory.
Documentation
Documentation of method comparison should be maintained as part of the supporting documentation for method B (the new method).
Method changes that warrant a designation of ‘new method’
Whether changes in an existing method are sufficient in scope for the method to be regarded as ‘new’ is determined by the likelihood of those changes to meaningfully affect the accuracy, precision, sensitivity and/or specificity of the measurements. Typically, a new method is being chosen because it improves accuracy, precision, sensitivity, specificity, throughput and/or cost of analysis. Whether changes are sufficient in scope for the method to be regarded as ‘new’ is an assessment assigned to the laboratory supervisor who should consult with their laboratory director if they have any questions. A ‘new’ method that measures analytes with reference values or previous epidemiologic studies should undergo comparison with the previous methods as described.
Remedial Actions
Remedial actions are to be taken and documented when:
1) Test systems perform outside acceptable performance specifications. Remedial action is to be documented in the appropriate QC manual.
2) Participant test results are outside the reportable range. Remedial action, such as dilution of the specimen until its concentration result is within the reportable range, is to be noted in the log book of the primary instrument used in the analysis or on the sample run sheets.
3) Results of control materials and calibration materials fail to meet quality control criteria. Remedial actions are to be noted in the appropriate QC manual.
4) When errors are detected in the reporting of participant results, the Laboratory Supervisor is to notify the grant PI by phone, followed by the issuance of a corrected report within a suitable time period, not to exceed one week. The corrected report must clearly show in the title that the new results are corrected results. Exact duplicates of the original as well as the corrected report are to be maintained. The corrected report is to be approved and signed by the Laboratory Director.
Quality promotion
High quality laboratory results are generated when all phases of the measurement process (i.e., pre-analytic, analytic, and post-analytic) are conducted properly. Factors which promote high quality results include:
1) competent, well-trained and motivated laboratory staff
2) quality laboratory facilities
3) well-maintained, high-quality laboratory equipment
4) high quality laboratory analytical methods
5) clear commitment of management to quality laboratory results
6) commitment to laboratory safety