There is a discussion at my lab about measuring power meter reference outputs. I measure them using a 432A and a 478A-H76, or a Tegam 1806 or 1830A and a thermistor mount. Some of my associates argue that we could measure the output just as accurately by taking an 8482A and an E4418B and using them to compare the output of a 435B-K05 to the UUT power meter's reference output.
My reason against their approach is that by taking two measurements, the mismatch uncertainty and the instrumentation-error uncertainty will double, not to mention that the uncertainty of the 435B-K05's output has to be included. They say that since it is a relative measurement, the only uncertainty we'd have to consider is that of the 435B-K05's output. What do you think? We need to put this issue to rest one way or the other.
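To make the disagreement concrete, here is a minimal sketch of how the two uncertainty budgets compare under the usual root-sum-square (RSS) combination of independent terms. The percentage values below are purely illustrative placeholders, not specifications for any of the instruments named above:

```python
import math

def rss(*terms):
    """Root-sum-square combination of independent uncertainty terms (in %)."""
    return math.sqrt(sum(t ** 2 for t in terms))

# Illustrative (assumed, not specified) component uncertainties, in percent:
mismatch = 0.2          # mismatch uncertainty per connection
instrumentation = 0.5   # power meter instrumentation error per reading
k05_output = 1.2        # 435B-K05 reference output uncertainty

# Direct method: one measurement of the UUT reference output
direct = rss(mismatch, instrumentation)

# Transfer method: measure the K05, then the UUT (two connections, two
# readings), and carry the K05's own output uncertainty as well
transfer = rss(mismatch, instrumentation,
               mismatch, instrumentation,
               k05_output)

print(f"direct   ~ +/- {direct:.2f} %")
print(f"transfer ~ +/- {transfer:.2f} %")
```

The point of the sketch is only that the mismatch and instrumentation terms enter twice in the transfer method and the K05's output uncertainty is added on top, so the transfer budget cannot collapse to the K05's output uncertainty alone.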
The 435B-K05 was set up by HP/Agilent as an economical tertiary standard for laboratories that could not afford the thermistor-mount method and/or rubidium/cesium standards (or any disciplined oscillator), or that did not need that level of accuracy. These were mainly some military labs back in the day, and smaller calibration laboratories doing field work in remote locations. It was basically a 10811A quartz OCXO and the calibrator from a 435 power meter. The two items were calibrated against the next-echelon standards and had a basic accuracy of +/- 1.2% for 1 mW @ 50 MHz and 5 x 10^-10 per day frequency stability. Each of these could be improved with newer available uncertainties and data history.
Yes, your power-transfer uncertainties would increase by the VSWR of the transfer power sensor (magnitude/phase), the repeatability of the sensor, the source VSWR of the K05 unit, the basic transfer repeatability of the meter being used, and thermal drift. Those would all have to be factored in to ascertain your new measurement uncertainty. With a quality sensor with a VSWR lower than 1.01 @ 50 MHz, your increase in uncertainty would be approximately +/- 0.3% worst case.
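For the mismatch piece specifically, the worst-case limits follow from the standard 2·Γ_source·Γ_load formula, with each reflection-coefficient magnitude derived from the VSWR. A quick sketch (the 1.05 source VSWR for the K05 is an assumed example value, not a published spec; the 1.01 sensor VSWR is from the figure above):

```python
def gamma(vswr):
    """Reflection coefficient magnitude |Gamma| from a VSWR value."""
    return (vswr - 1.0) / (vswr + 1.0)

def mismatch_limits_pct(vswr_source, vswr_load):
    """Worst-case mismatch uncertainty limits, +/- percent,
    using the standard 2 * |Gamma_s| * |Gamma_l| approximation."""
    return 2.0 * gamma(vswr_source) * gamma(vswr_load) * 100.0

# Assumed example: K05 source VSWR 1.05, transfer sensor VSWR 1.01 @ 50 MHz
print(f"+/- {mismatch_limits_pct(1.05, 1.01):.3f} %")
```

This only covers the mismatch contribution per connection; the sensor repeatability, meter transfer repeatability, and thermal drift terms still have to be combined with it to arrive at the overall figure.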