They are the same thing!! In the past, the Smoothing mechanism was the only way to change the default group delay aperture (2 points). However, when used, the status would indicate that smoothing was turned on, and that did not sit well with some of our more metrology-minded users (so it was never turned on by default, and default group delay measurements ended up noisier than they had to be). Now we have separated the Smoothing function, which is available on all measurements, from the group delay aperture setting, which is applied only to group delay measurements. We have also set the default aperture to 11 points, which should give you a reasonable-looking measurement under the default conditions.
Thanks Dara...now to pull more of Dr. Joel's teeth (remember, I'm NOT an RF guy, I'm a SW guy)...
I'm noticing that the # of points and the group delay aperture (or smoothing) points have the greatest effect on my measurements.
For one of my tests, I have a DUT w/ a 5MHz bandwidth but we're measuring across 8MHz.
I get the following results (approximates): BTW, averaging doesn't seem to have any effect (at least across my DUT's bandwidth - it only matters out-of-band) but I am averaging 5 times right now. Also, the IFBW doesn't seem to have a great impact either but right now I have a 1000 Hz IF bandwidth (but I have played w/ 50 Hz which is what my engineer originally wanted). Since I didn't see a big difference in the variance over the bandwidth I went w/ 1kHz since it goes WAY faster that way.
I have no idea what the real measurement is. I think I need to "calculate" my # of points along w/ the aperture based on bandwidth. Is there some sort of magical formula I should be using?
FWIW, our original Rohde & Schwarz ZVRE measurement that we sold off on used 1601 pts, an aperture percentage of 0.1875 (which comes out to 3 pts), 25 averages and an IFBW of 1kHz. This is the data that I'm trying to recreate using the PNA-X. For that measurement, the variance was about 60ns.
It would be best to know if the DUT is a mixer, or amplifier or filter.
If you could post a saved state (with the sweep on hold) that would give the best information.
Is the variation noise-like (changes sweep to sweep) or stable (systematic, as in calibration-related)?
Try this: take the data->mem and data/mem, and look at the results. Is the noise (variation) about the same? If so, it is due to trace noise on the phase response.
Sometimes this can be improved by wider IF BW and more averaging.
Also, FYI, at least on some of the R&S boxes there was also a delay aperture set by default to 11 pts, independent of the smoothing function. In this way, the group delay was always smoothed even with smoothing off. Not sure if this was on the ZVR, but it was definitely on later products. This gives the impression of smoother delay than expected with low smoothing settings.
Mixer (VMC used). The variation is VERY stable. Averaging has almost NO effect in-band. I ran a single average 5 times (all w/ the same results) and compared it against a sample w/ 5 averages and the results were virtually identical.
I've had two others check my cal once w/ a pad (to show double the GD of the pad) and once w/ a 50ohm termination where we see a -45dBm GD. So, it seems to me that the cal is good.
Are you saying that there's no formula for determining the proper settings (or you just needed this additional info) say based on the bandwidth of the DUT and measurement? Even my engineer is a little confused now. We realize that the smoothing greatly affects the variation but the engineer is very hesitant to use more than 3 points (that's what he's always used and what he's comfortable w/).
I think you missed some of what I stated earlier (the R&S was set to 3 pts smoothing).
I saw in another thread you mentioned that the IFBW should be 10% of the measured bandwidth. So for my 8MHz measurement (of a 5MHz DUT), I should use an 8kHz IFBW? How many points is proper for this bandwidth?
If the variation in group delay is stable, and it is a mixer measurement, then it is almost certainly due to a calibration issue, most likely due to the characterization of the calibration mixer.
For narrow-band measurements, the GD error of our receivers is quite small, and the error in calibration can be greater than the error in our receivers, if the calibration is not done carefully.
For a test, unselect the calibration calset (Cal:Manage:Calsets then highlight the calset for the channel, hit Unselect and then CLOSE (not OK, which reselects it))
How does the GD flatness look?
I figured it might be a mixer: they complicate GD measurements considerably.
GD is a measure of PHASE VARIATION: group delay = -(change in phase)/(360 * change in frequency). When you set the smoothing, what you are really doing is setting the span over which delta-phase and delta-frequency are taken; smoothing of 3 points means the phase delta is taken over 3 points (two steps of frequency).
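To make that arithmetic concrete, here is a minimal sketch (Python/NumPy; the function name and the flat-delay example are my own illustration, not a PNA API) of how an N-point aperture turns a phase trace into group delay:

```python
import numpy as np

def group_delay_ns(freq_hz, phase_deg, aperture_pts=3):
    """Group delay GD = -(delta phase) / (360 * delta freq).

    An 'aperture_pts'-point aperture takes the deltas across
    (aperture_pts - 1) frequency steps, which is why a wider
    aperture averages away more trace noise.
    """
    step = aperture_pts - 1
    dphi = phase_deg[step:] - phase_deg[:-step]   # degrees
    df = freq_hz[step:] - freq_hz[:-step]         # Hz
    return -dphi / (360.0 * df) * 1e9             # seconds -> ns

# Sanity check: a flat 100 ns delay line has phase = -360 * f * tau (in degrees)
f = np.linspace(1e6, 9e6, 161)
phase = -360.0 * f * 100e-9
gd = group_delay_ns(f, phase, aperture_pts=3)    # every point comes out ~100 ns
```

With a noiseless linear phase, any aperture returns the same 100 ns; the aperture width only matters once trace noise is present.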
BTW: yes, I read that you said smoothing was 3 points, but R&S had an additional smoothing-like function, called GD aperture, that applies an additional 11-point smoothing. But I don't think this is the issue, since you say the variation in the trace is not noise-like. Variation in the GD trace that is stable sweep-to-sweep is almost always due to either: 1) calibration error; 2) real variation in the DUT.
Most likely, the characterization of the calibration mixer is to blame. It should be characterized as a separate step (don't do the characterization as part of the VMC cal; instead characterize it using the "Characterize Cal Mixer" wizard, and recall the file). When doing the characterization, don't put ANY cable between the PNA port and the mixer; use only a hard-line connection (such as an adapter). The characterization results are very sensitive to variation in the test port cable during characterization.
If you save the characterization to an S2P file, then you can recall it to a PNA channel and look at the GD variation of the mixer.
For GD measurements, the combination of SMOOTHING and/or GD APERTURE should be on the order of 5-10% of the BW. This is NOT related to IF BW, which, in combination with the averaging factor, sets the trace noise, or trace-to-trace variation.
You might consider contacting your local application engineer to come out and help with the setup.
Contacting the local application engineers is not helpful for us since they aren't allowed in the lab. We can't discuss details w/ them any more than w/ you, and they end up asking you guys anyway (so I'm trying to cut out the middle man).
I'll talk to the engineer about increasing the smoothing (actually I'm using the GD aperture and turning OFF smoothing), but last time I asked to increase that, he didn't like the idea. You said the "combination" of smoothing and GD aperture; I thought Dara said they were the same, so I'm unsure exactly what you mean by 5-10% of the bandwidth. So w/ 161 pts, each point covers 50kHz, and then I should use 8-16 smoothing pts (provided my math is correct), which should be 9-17 since odd numbers are usually used?
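That arithmetic can be sketched in a few lines (Python; the helper name and the choice of the 8 MHz measured span as the "BW" in the 5-10% rule are my assumptions, not a formula from the PNA):

```python
import math

def aperture_points(span_hz, num_points, bw_hz, fraction):
    """Aperture points needed to cover `fraction` of bandwidth `bw_hz`.

    Point spacing = span / (num_points - 1); an N-point aperture spans
    (N - 1) spacings, so solve for N and round up to an odd number.
    """
    spacing = span_hz / (num_points - 1)          # 50 kHz for 161 pts / 8 MHz
    n = math.ceil(fraction * bw_hz / spacing) + 1
    return n if n % 2 else n + 1                  # apertures are usually odd

# 5-10% of the 8 MHz span, with 161 points:
lo_pts = aperture_points(8e6, 161, 8e6, 0.05)    # 9 points
hi_pts = aperture_points(8e6, 161, 8e6, 0.10)    # 17 points
```

That reproduces the 9-17 point range in the post above; using the 5 MHz DUT bandwidth instead would give a smaller aperture.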
BTW, if I've been unclear, when I say "variation" I mean that to mean the max - min GD over 80% of the unit's bandwidth.
For the R&S, we didn't explicitly set the GD aperture so it probably was defaulted if the settings don't affect each other. I don't have a R&S handy anymore but I can look at the old code and parameter values.
I'll return w/ the answers you wanted on the calibration in a little while.
What I meant by saying that they are the same is that if you turn on, for example, 5-point smoothing OR a 5-point GD aperture on the same data, you get the same answer. However, now that the two functions are separated, you can use a combination of smoothing and GD aperture. That way, instead of having a big smoothing value (which most engineers don't like), you can have a combination of GD aperture and a small smoothing factor, which will give you the same effect as a large smoothing factor with less of the undesirable aftertaste.
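If smoothing is modeled as a simple moving average (an assumption on my part; the instrument's smoothing and its point-count conventions may differ in detail), the "same answer" claim even falls out algebraically: an N-point GD aperture equals a 2-point GD followed by an (N-1)-point moving average, because the point-to-point phase deltas telescope.

```python
import numpy as np

rng = np.random.default_rng(1)
f = np.linspace(1e6, 9e6, 161)
df = f[1] - f[0]
phase = -360.0 * f * 100e-9 + rng.normal(0, 0.5, f.size)  # noisy phase (deg)

# 5-point GD aperture: phase deltas taken across 4 frequency steps
gd_aperture = -(phase[4:] - phase[:-4]) / (360.0 * 4 * df)

# Default 2-point GD, then a 4-point moving average standing in for smoothing
gd_2pt = -np.diff(phase) / (360.0 * df)
gd_smoothed = np.convolve(gd_2pt, np.ones(4) / 4, mode="valid")

# The inner phase terms cancel in the running average, so the traces match
same = np.allclose(gd_aperture, gd_smoothed)
```

The same cancellation is why stacking a GD aperture with a small smoothing factor behaves like one larger smoothing factor.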
So for Dr. Joel's question on the cal (and I think he's REALLY hating me now)...
We don't have calsets. We save .csa files and recall them. Anyway, so I tried just turning CORRECTION OFF/ON, and the difference in the variation "looks" negligible (I'd have to plot them to see, and if you need that I can certainly code up both cases). The amplitude changes by about 90 ns but the trace looks identical (at least to my and akalei's eyes) w/ auto-scale on. If you want me to try ANYTHING, I'll do it.
At this point the systems engineer is NOT liking me either since I told them it "appears" like their already sold off data (that they took 2-3 months to come up w/ the settings) may not be accurate. He's now asking me for a 10x sample across ALL cases (which takes many hrs w/ the low IFBW) once I've come up w/ new settings PLUS any reasons why I'm coming up w/ the changes.
We don't have calsets. We save .csa files and recall them.
When you have .csa files (csa = Cal State Archive), you do have calsets. If you don't believe me, go to your PNA and, using the cal manager, delete all the calsets that you see there. Now, if you rely on .csa files for everything, this shouldn't be a problem, because anything you need will be restored once you recall the appropriate .csa file. After you have deleted all the calsets, recall one of your .csa files that has a correction state. Now if you bring up the cal manager again, you will see that there is a calset for every channel in the state that has corrections turned on. Of course, you can just take my word for it and not risk losing something valuable from your calset directory.
How can I not trust the all and mighty powerful Dara S (at least that's what akalei and I call you)? So CH2_CALREG indicated VMC (and we're using channel 2), so I "unselected" it, and X'd the window, and the difference is the same as if I turned correction OFF (about 90ns). We had incorrectly assumed the current calset would be highlighted (and none were).
If turning off calibration doesn't affect the shape much (ignoring offsets) then it is likely that the shape is set by the response of the DUT.
The typical error in VMC measurements with good care is on the order of 0.2-0.5 nsec. Without calibration, maybe 0.5-2 nsec. If your variation is a lot bigger, it is from either the DUT, the reference mixer, or something else.
Does the test device have its own LO, or are you using the PNA port as the LO? If it has its own LO, is it tied to the 10 MHz reference?
And (I should have thought to ask this first) is the LO fixed, with the RF and IF frequencies swept, or is the LO swept?
Have you tried the SMC+Phase method? It can use the same mixer as the calibration mixer for VMC and a much simpler setup for delay measurements.
In any event, in 3 weeks or so we are introducing a new calibration method that does not require any reference or calibration mixer, with typical performance of about +-100 psec accuracy (up to 27 GHz). Very simple and very straightforward. You might consider switching to it.
Also, if you have more than 1 LO frequency, the new method calibrates them all at one time, so it can be much faster.
We had incorrectly assumed the current calset would be highlighted (and none were).
No, you just see the channel number(s) in front of the calset, because the cal manager is showing you the state of all the calibrations in the instrument, but all the actions (other than delete and view properties) are associated with the channel that was active when you brought up the cal manager. A user calset can be used to correct multiple channels, so you might see a list of multiple channel numbers in front of one user calset. Channel Cal Registers, on the other hand, can only be turned on in their own specific channel; therefore, if you have something like CH2_CALREG, you would only see "2" in the channel list if that channel has correction turned on.