For a spectrum analyzer, dynamic range generally refers to the analyzer's ability to measure distortion. A spectrum analyzer has two primary limitations to such measurements: its own internally generated distortion and its noise floor (sensitivity). This program asks you to enter values based on your spectrum analyzer's specifications and then maps the analyzer's internally generated second-harmonic distortion, third-order intermodulation distortion, and the signal-to-noise ratio of the fundamental tone(s) over a wide range of levels at the analyzer's input mixer.
There is a third possible limitation to dynamic range when measuring third-order intermodulation distortion: phase noise on the analyzer's LO. The program also asks you to enter the analyzer's phase noise in dBc/Hz at an offset equal to the separation between the two test tones (signals). You may then add the phase noise curve to the chart to see to what extent phase noise limits the measurement.
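Because phase noise is specified relative to the carrier (in dBc/Hz), the limit it places on dynamic range does not change with mixer level: on the chart it is a horizontal line. A minimal sketch of that curve, assuming the usual approximation that the noise power in the measurement is the density plus 10*log10(RBW) (the function name and example values are ours, not the program's):

```python
import math

def phase_noise_limit_dbc(phase_noise_dbc_hz, rbw_hz):
    """Approximate phase-noise-limited dynamic range in dBc.

    phase_noise_dbc_hz: LO phase noise density at an offset equal to
    the separation between the two test tones, in dBc/Hz.
    rbw_hz: resolution bandwidth used for the measurement, in Hz.

    The phase noise measured in the resolution bandwidth is roughly
    the density plus 10*log10(RBW); since it is relative to the
    carrier, the result is independent of mixer level.
    """
    return phase_noise_dbc_hz + 10 * math.log10(rbw_hz)

# Example: -105 dBc/Hz at the tone spacing, 1 kHz RBW
# gives a floor of -105 + 30 = -75 dBc.
print(phase_noise_limit_dbc(-105, 1e3))
```

If the distortion you wish to measure is below this line, narrowing the resolution bandwidth lowers the phase noise floor dB-for-dB (until other limits intervene).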
Distortion is a relative measurement. The degree of distortion for the harmonic case is indicated by the amplitudes of the second and perhaps higher harmonics relative to a pure fundamental. For the intermodulation case, distortion created by the interaction of two pure tones, the most important distortion products are the close-in third-order products
(2 * F1 - F2 and 2 * F2 - F1, where F1 and F2 are the pure tones).
Again, the measurement is relative, and the distortion products are measured relative to the test tones. In this discussion and program, we assume that the two test tones are equal in amplitude; however, they need not be. (Intermodulation may be created by more than two tones, but we shall limit the discussion here to the two-tone case.)
Harmonic distortion is relatively easy to minimize: use a low-pass filter. For test purposes a low-pass filter should probably be used to provide a sufficiently pure test signal. However, in a system or component, if the tones that produce the intermodulation distortion are spaced such that they as well as the distortion products fall within the desired frequency band, you cannot reduce the distortion with a filter. Better performance must be designed into the system in the first place. Also, creating the two pure test tones is not trivial. Not only must you minimize the harmonic content, but you must take steps to be certain that the two sources do not interact with each other.
Now that we have pure test tones, we can apply them to a test device, e.g. an amplifier, and measure the output for distortion using a spectrum analyzer. However, as noted above, the analyzer has its own limitations. First of all, the input mixer is a non-linear device (the non-linearity is what produces the intermediate frequency or IF signal) and so always produces distortion products. These distortion products fall at the same frequencies as the distortion products that we wish to measure, so we need to know how our mixer behaves to know whether or not our analyzer can make the measurement.
For second-order distortion products, e.g. second harmonic, the distortion created by the mixer changes as the square of the amplitude of the fundamental. In logarithmic terms, the distortion created by the mixer changes 2 dB for every 1-dB change in the level of the fundamental. If we equate dynamic range to the degree of distortion created by the mixer, we see that the dynamic range of our analyzer for second-order products changes 1 dB for every 1-dB change in the level of the fundamental. For third-order distortion products, e.g. third harmonic or third-order intermodulation, the distortion created by the mixer changes as the cube of the amplitude of the fundamental tone or, in the case of intermodulation tests, of the two tones. In logarithmic terms, the distortion created by the mixer changes 3 dB for every 1-dB change in the level of the fundamental tone or tones, so the dynamic range of our analyzer changes 2 dB for every 1-dB change in the level of the test tone or tones.
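The 1-dB-per-dB and 2-dB-per-dB relationships above can be sketched directly. The spec anchor points below (-70 dBc at -40 dBm for second harmonic, -85 dBc at -30 dBm for third-order intermodulation) are hypothetical placeholders, not values taken from the program:

```python
def second_order_dbc(mixer_dbm, spec_dbc=-70.0, spec_mixer_dbm=-40.0):
    """Internally generated second-harmonic distortion in dBc.

    Second-order products change 2 dB per 1-dB change in the
    fundamental, so relative to the fundamental they change
    1 dB per 1-dB change in mixer level.
    """
    return spec_dbc + (mixer_dbm - spec_mixer_dbm)

def third_order_dbc(mixer_dbm, spec_dbc=-85.0, spec_mixer_dbm=-30.0):
    """Internally generated third-order intermodulation in dBc.

    Third-order products change 3 dB per 1-dB change in the tones,
    so relative to the tones they change 2 dB per 1-dB change in
    mixer level.
    """
    return spec_dbc + 2 * (mixer_dbm - spec_mixer_dbm)

# Lowering the mixer level 10 dB improves second-order dynamic
# range by 10 dB but third-order dynamic range by 20 dB:
print(second_order_dbc(-50))  # -80.0 dBc
print(third_order_dbc(-40))   # -105.0 dBc
```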
For maximum dynamic range, then, it would seem that we want to lower the level of the signal at the mixer as much as possible to minimize the analyzer's contribution to the net distortion that is measured. Unfortunately, there is a second factor that impacts the dynamic range of our analyzer.
That second factor is our analyzer's noise floor. If we set the level at the mixer too low, the internally generated distortion products fall below the noise floor, and, at least potentially, we have decreased dynamic range. In this case, then, we can equate dynamic range to the ratio of the signal to the analyzer's noise floor (S/N), where the signal is the fundamental test tone or tones. For every 1-dB change in signal level, there is a 1-dB change in S/N, and the higher the signal level, the greater the dynamic range. This is just the opposite of the distortion case.
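Because distortion improves as the mixer level falls while S/N improves as it rises, the optimum mixer level is where the two curves cross. A sketch of that crossover, assuming the specs have been converted to intercept points (SHI = second-harmonic intercept, TOI = third-order intercept, DANL = displayed average noise level; these names and the example values are ours, not the program's):

```python
def optimum_second_order(shi_dbm, danl_dbm):
    """Mixer level and dynamic range where the second-harmonic
    curve crosses the noise curve.

    Distortion below the fundamental is SHI - L (1 dB per dB);
    S/N is L - DANL.  Setting them equal gives the optimum.
    """
    mixer = (shi_dbm + danl_dbm) / 2
    return mixer, mixer - danl_dbm

def optimum_third_order(toi_dbm, danl_dbm):
    """Same crossover for third-order intermodulation.

    Distortion below the tones is 2 * (TOI - L) (2 dB per dB);
    S/N is L - DANL.
    """
    mixer = (2 * toi_dbm + danl_dbm) / 3
    return mixer, mixer - danl_dbm

# Hypothetical specs: SHI = +30 dBm, TOI = +12.5 dBm, DANL = -115 dBm
print(optimum_second_order(30.0, -115.0))   # (-42.5, 72.5)
print(optimum_third_order(12.5, -115.0))    # (-30.0, 85.0)
```

Note that the third-order optimum sits at a lower mixer level than the second-order one would suggest, because the third-order curve falls twice as fast.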
This program enables you to determine the optimum signal level at the mixer. You are asked to enter specific data about your analyzer: distortion performance at a given mixer level for both second harmonic and third-order intermodulation distortion plus noise floor in terms of displayed average noise level. You can then have the program plot the specific data points and draw the associated curves to show analyzer performance over a range of signal levels at the mixer. With these data, you can determine the appropriate input attenuator setting for the analyzer to optimize the dynamic range for your specific measurement requirement.
The curves are drawn without regard to measurement uncertainty; they simply indicate analyzer performance. If you should happen to set the mixer level such that the analyzer's internally generated distortion is the same level as the distortion you wish to measure, the indicated level of distortion could be anywhere from 6 dB too high to infinitely low (complete cancellation). A specific error cannot be defined because the phase relationship between the distortion being measured and that of the analyzer is unknown. We can only indicate a window of uncertainty.
We can reduce the uncertainty by reducing the level of the test tones at the analyzer's mixer to reduce the distortion generated by the analyzer. For example, if we set the analyzer's input attenuator such that the internally-generated distortion is 19 dB below the distortion we wish to measure, the uncertainty is about +/- 1 dB.
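The window follows from adding the two distortion products as voltages with an unknown relative phase: the displayed level can fall anywhere between the in-phase sum and the out-of-phase difference. A hedged sketch of that arithmetic (the function name is ours):

```python
import math

def distortion_uncertainty_db(delta_db):
    """Uncertainty window (high, low) in dB when the analyzer's
    internal distortion is delta_db below the distortion being
    measured.

    The two products add as voltages with unknown phase; the
    displayed level lies between the in-phase sum and the
    out-of-phase difference.
    """
    r = 10 ** (-delta_db / 20)           # internal/external voltage ratio
    high = 20 * math.log10(1 + r)        # worst case, in phase
    low = 20 * math.log10(1 - r) if r < 1 else float('-inf')
    return high, low

# Equal levels (delta = 0 dB): +6 dB to minus infinity.
# A 19-dB margin: roughly +0.9 / -1.0 dB, i.e. about +/- 1 dB,
# matching the example above.
hi, lo = distortion_uncertainty_db(19)
print(round(hi, 2), round(lo, 2))
```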
The program asks you to enter the degree of uncertainty that you wish for the measurement. You can then have the program draw a second set of distortion curves showing new mixer levels vs dynamic range based on the entered uncertainty.
Signals, e.g. distortion components, close to the analyzer's noise floor are displayed erroneously because spectrum analyzers display signal plus noise. However, there is no window of uncertainty involved. For a given displayed signal-to-noise ratio, we can determine the correction to be applied to the indicated signal level to determine the actual signal level. For example, if the displayed signal-to-noise ratio is 3 dB, the correction is about -1 dB. Interestingly enough, if the displayed signal-to-noise ratio is 2 dB, the correction is about -2 dB; the signal level is very nearly equal to the average noise level.
The program asks you to enter an error value. You can then have the program draw a curve above the noise curve at the required displayed signal-to-noise ratio. The new curve shows the mixer level for the test tones vs dynamic range relative to noise based on the entered error value.
When the dialogue box appears, enter the various values called for; then follow the instructions in red on the screen.
There are default values in the program, and these are shown the first time that the dialogue box appears:
1) Analyzer's second-harmonic distortion specification: -70 dBc