
Thread: The difference between "Test Port Compression" and "Receiver Compression"


andy1010

Posts: 1
Registered: 09/04/12
The difference between "Test Port Compression" and "Receiver Compression"
Posted: Sep 4, 2012 8:03 AM
Hi,

I downloaded the PNA-X data sheet from the link below.
http://cp.literature.agilent.com/litweb/pdf/N5245-90008.pdf

There is compression information on page 46, Tables 18 and 19.

My question is what is the difference between "Test Port Compression" and "Receiver Compression"?
Dr_joel


Posts: 2,688
Registered: 12/01/05
Re: The difference between "Test Port Compression" and "Receiver Compression"
Posted: Sep 4, 2012 9:22 AM   in response to: andy1010
Those tables are a bit confusing, but they refer to the same thing. In fact, the first one gives only typical values and the second only specified values (no typical).

The reason for having two tables is that in production, and more particularly in field-service tests, it is complicated to get enough test port power to drive the receiver into compression, so the power level at which compression can be specified depends more on the power the test system can generate. For Table 19, as I remember, for traceability reasons the uncertainty of the measurement is on the order of 0.1 dB, so we look for something on the order of 0.05 dB compression to allow for guard banding and environmental drift (based on temperature-chamber results on a few units).

Table 18 shows the typical performance of the receiver, which is measured in a different way. We pad down the test receiver on port 2, drive the source to maximum power, record the compression of S21 (which looks like expansion, since S21 is B/R and R is getting smaller due to compression), and assign that to the R channel. Then we remove the pad, run the same test, look at compression again (across frequency), remove the R-channel effect, and assign the result to the test port. In our products, the R channel is almost always given extra padding to ensure less compression than the test channel. The value should be below 0.1 dB at the power levels shown. But these conditions are not readily achievable in the field, so this test is not done as part of verification, and that is why it is not specified.
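The two-step extraction described above can be sketched in a few lines. This is a minimal illustration only, not Keysight/Agilent test code; the function name and all the S21 data are made up to show the arithmetic (deviation from the small-signal value, then subtracting the padded R-channel result from the unpadded run).

```python
# Sketch of the padded/unpadded compression extraction described above.
# All names and data are illustrative assumptions, not real test values.

def compression_db(s21_db, ref_index=0):
    """Deviation of |S21| (in dB) from its small-signal value.

    s21_db: |S21| in dB at each source-power step, lowest power first.
    Positive deviation looks like expansion when the R channel
    compresses, because S21 = B/R and a shrinking R inflates the ratio.
    """
    return [s - s21_db[ref_index] for s in s21_db]

# Step 1: pad on the test receiver -> any deviation is dominated by
# the R channel, so assign it to R.
s21_padded_db = [0.00, 0.00, 0.01, 0.03]       # example sweep data
r_channel_db = compression_db(s21_padded_db)

# Step 2: pad removed -> deviation now includes both receivers;
# subtract the R-channel result to isolate the test-port receiver.
s21_unpadded_db = [0.00, -0.01, -0.03, -0.06]  # example sweep data
test_port_db = [u - r for u, r in zip(compression_db(s21_unpadded_db),
                                      r_channel_db)]

print(test_port_db)  # per-step test-port compression in dB
```

With these made-up numbers the test port shows well under 0.1 dB of compression at every step, consistent with the typical values in Table 18.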

In actual performance, the compression is very, very small until you reach the ADC data read limit, at which point a receiver overload message appears. For my own work, and as a general recommendation, I keep the input below about +10 dBm for most measurements to avoid any compression effects.
