Even the finest test equipment has imperfections that can degrade measurement results. Some of the imperfections that contribute to measurement error are repeatable and predictable over time and temperature, and can be removed; others are random and cannot be. Network analyzer error correction is based on measuring known electrical standards, such as a through, an open circuit, a short circuit, and a precision load impedance.
The effect of error correction on displayed data can be dramatic (Figure 1). Without error correction, measurements on a bandpass filter show considerable loss and ripple. The smoother, error-corrected trace, produced with a two-port calibration that removes the effects of systematic errors, better represents the actual performance of the device under test (DUT).
This application note describes several types of calibration procedures, including the popular Short-Open-Load-Through (SOLT) technique and the Through-Reflect-Line (TRL) technique. The effectiveness of these procedures is illustrated with measurements of high-frequency components such as filters. Calibrations are also shown for cases requiring coaxial adapters to connect the test equipment, DUT, and calibration standards.
Sources and Types of Errors
All measurement systems, including those employing network analyzers, can be plagued by three types of measurement errors:
Systematic errors (Figure 2) are caused by imperfections in the test equipment and test setup. If these errors do not vary over time, they can be characterized through calibration and mathematically removed during the measurement process. Systematic errors encountered in network measurements are related to signal leakage, signal reflections, and frequency response. There are six types of systematic errors:

- Directivity and crosstalk errors, related to signal leakage
- Source match and load match errors, related to signal reflections
- Reflection tracking and transmission tracking errors, related to frequency response
(The full two-port error model includes all six of these terms for the forward direction and the same six (with different data) in the reverse direction, for a total of twelve error terms. This is why two-port calibration is often referred to as twelve-term error correction.)
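The arithmetic behind error-term correction is easiest to see in the simpler one-port case, which has three error terms (directivity, source match, and reflection tracking) that can be solved from measurements of three known standards. The sketch below uses illustrative error-term values and ideal short, open, and load standards; it is not the model of any particular instrument:

```python
import numpy as np

def solve_one_port_terms(gamma_actual, gamma_measured):
    """Solve the three one-port error terms from three known standards.
    Model: Gm = e00 + e11*Ga*Gm - delta*Ga, where delta = e00*e11 - e10*e01."""
    A = np.array([[1.0, ga * gm, -ga]
                  for ga, gm in zip(gamma_actual, gamma_measured)], dtype=complex)
    b = np.array(gamma_measured, dtype=complex)
    e00, e11, delta = np.linalg.solve(A, b)
    return e00, e11, delta

def correct(gm, e00, e11, delta):
    """De-embed the actual reflection coefficient from a raw measurement."""
    return (gm - e00) / (e11 * gm - delta)

# Simulated raw measurements using illustrative error terms
# (e00 = directivity, e11 = source match, e10e01 = reflection tracking).
e00, e11, e10e01 = 0.05, 0.1, 0.9
raw = lambda ga: e00 + e10e01 * ga / (1 - e11 * ga)

standards = [-1.0, 1.0, 0.0]                       # ideal short, open, load
terms = solve_one_port_terms(standards, [raw(ga) for ga in standards])

dut_actual = 0.5 + 0.2j
dut_corrected = correct(raw(dut_actual), *terms)   # recovers dut_actual
```

The full twelve-term two-port correction follows the same idea, with more standards and more simultaneous equations per measurement direction.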
Random errors vary randomly as a function of time. Since they are not predictable, they cannot be removed by calibration. The main contributors to random errors are instrument noise (e.g., IF noise floor), switch repeatability, and connector repeatability. When using network analyzers, noise errors can often be reduced by increasing source power, narrowing the IF bandwidth, or using trace averaging over multiple sweeps.
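The benefit of trace averaging can be illustrated numerically: averaging N independent sweeps reduces random noise roughly by a factor of sqrt(N). The noise level and trace value below are made up for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
NOISE_SIGMA = 0.01            # illustrative per-sweep noise level
TRUE_VAL = 0.8 + 0.1j         # illustrative noise-free trace point

def rms_error(n_avg, trials=2000):
    """RMS deviation from the true value after averaging n_avg noisy sweeps."""
    noise = (rng.normal(0, NOISE_SIGMA, (trials, n_avg))
             + 1j * rng.normal(0, NOISE_SIGMA, (trials, n_avg)))
    averaged = np.mean(TRUE_VAL + noise, axis=1)
    return np.sqrt(np.mean(np.abs(averaged - TRUE_VAL) ** 2))

# Averaging 64 sweeps shrinks the residual error roughly 8x (sqrt(64)).
print(rms_error(1), rms_error(64))
```

This is why averaging trades sweep time for accuracy: each doubling of the average count buys only a sqrt(2) noise reduction.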
Drift errors occur when a test system’s performance changes after a calibration has been performed. They are primarily caused by temperature variation and can be removed by additional calibration. The rate of drift determines how frequently additional calibrations are needed. However, by constructing a test environment with a stable ambient temperature, drift errors can usually be minimized. While test equipment may be specified to operate over a temperature range of 0 °C to +55 °C, a more controlled range such as +25 °C ± 5 °C can improve measurement accuracy (and reduce or eliminate the need for periodic recalibration) by minimizing drift errors.