Refusing Limits in Design Validation

(Left to right) Loren Betts, Bernadette Smith, Mike Dobbert, and Matthew Todhunter.


In our latest Refusing Limits interview, we sit down with a team of Keysight innovators who experience many of the same challenges that our customers face when bringing new technologies to market. Loren Betts, Matthew Todhunter, Bernadette Smith, Mike Dobbert, and Erwin Siegel like to say that “validation begins at home,” not just because they need to ensure that Keysight products meet our future-proofed specifications, but also because they are a cohesive team that seems to truly enjoy the challenge of exceeding customer expectations.

Loren, you and your colleagues fly under the radar a bit, making sure that we can validate even our most ambitious instrument designs. What is the driving force behind this work?

Simply put, we need to accurately convey our specifications to our customers, no matter how cutting-edge the design. The instruments we produce are always pushing the boundaries of metrology. So, this team got together and initiated an “optimized test” program to push more measurement science into our order fulfillment area—our production measurement—where the instruments we manufacture get tested.

We started with a new source [source measure unit] that Matt was working on, one that required a much higher frequency density of calibration and an individualized calibration for each attenuator state, so the characterization required gathering a lot of data to prove out the specification. Using the existing technology at the time was just going to take too long to be practical—something like 40 hours for a test that would typically take 6 hours. Plus, the measurement instruments that we were using to measure this new source didn’t have the accuracy potential that was required. And so that started us down the path of invention.

Matt, it seems to me like metrologists spend a lot of time thinking about those standards and the consequent requirements, yes?

Sure. When we were thinking about providing our customers with solutions for 5G, we needed to start years earlier to create an instrument that would help them transmit and receive a 5 GHz signal before there was a widespread need for those capabilities. But we had to go one step further in that we needed to test such an innovative, limits-stretching device with the instruments on our benches at that time. It’s a chicken-and-egg scenario: to achieve significantly higher levels of fidelity and accuracy, we have to apply existing technology to measure new instruments that will be going out to the customer perhaps before there’s even a recognized commercial use. And like our customers, as designs get more complex, we can’t allow the time and costs from testing to increase in parallel.
So, with that in mind, Matt, what do we do differently to avoid costly rework and delay as complexity becomes harder to wrangle?

Well, we started by rethinking everything we do.

Okay…you’ve got my attention…

What I mean by that is, if you think about it, we’re essentially operating a supply chain under one roof. We are our own supplier of the microcircuits and other components that we fit into our instruments. And like any optimized supply chain, you need some traceability of all your components. The expectation would be traceability at the instrument level, but we saw value in traceability at the component level for better diagnosis and root-cause analysis. You don’t want to end up in situations where subcomponents pass testing at their level, but when you get to staging production at the system-assembly level, you’re not getting the performance you expected.

We just had to figure out a way to make that happen, to align those measurement techniques across the various phases of the supply chain, in a repeatable, easy-to-use way. To build in the metrology, if you will. If we could improve that test process for instruments in the first pass, we would expect yields to increase at the system level…a solution that in turn becomes a source of value creation.

Could you describe the solution for our readers, please?

Traditionally, measurements performed on signal generators involved a power sensor and vector network analyzer (VNA) configuration. The power sensor is used to measure the output signal characteristics of the signal generator, or device under test, and the VNA is used to measure the output impedance of the signal generator. But the traditional approach wouldn’t let us measure a signal generator in an “on” state with the accuracy we needed. We were trying to validate signal generators containing active components, such as amplifiers, that exhibit nonlinear behavior, yet at that time the signal generator’s characteristics were being derived using linear measurements and models.
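As background for readers less familiar with this setup (a standard-textbook sketch, not the team’s specific procedure): in the conventional linear approach, the power sensor’s reading and the reflection coefficients measured by the VNA combine through the usual mismatch correction. With \(\Gamma_g\) the generator’s output reflection coefficient and \(\Gamma_s\) the sensor’s reflection coefficient, the generator’s available power is recovered from the power delivered to the sensor as:

```latex
P_{\mathrm{avail}} \;=\; P_{\mathrm{delivered}}\,
\frac{\left| 1 - \Gamma_g \Gamma_s \right|^{2}}{1 - \left| \Gamma_s \right|^{2}}
```

This correction is valid only while the generator behaves linearly; once active, nonlinear components are driven in the “on” state, a single \(\Gamma_g\) no longer captures the output behavior, which is the gap the nonlinear approach had to fill.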

The solution was to bring in the X-parameters, invented by Jan Verspecht [see Refusing Limits in Active Device Characterization], which Loren had worked with during his Ph.D. research on nonlinear VNA measurements years earlier. X-parameters model the relative phases and amplitudes of signals [such as harmonics and intermodulation products] generated by nonlinear components under large input power levels at all ports. By using nonlinear VNA measurements to characterize the signal generator’s X-parameters, we obtained figures of merit [including amplitude accuracy and output impedance] with unprecedented accuracy and measurement speed.
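For the technically inclined, a commonly published form of the X-parameter model (paraphrased from the Verspecht/Root literature, not from this team’s specific implementation) expresses each outgoing spectral component \(B_{pk}\) (port \(p\), harmonic \(k\)) as a large-signal response to the drive \(A_{11}\) plus linearized sensitivities to the other incident components \(A_{ql}\):

```latex
B_{pk} \;=\; X^{(F)}_{pk}\!\left(|A_{11}|\right) P^{k}
\;+\; \sum_{(q,l)\neq(1,1)} X^{(S)}_{pk,ql}\!\left(|A_{11}|\right) P^{\,k-l}\, A_{ql}
\;+\; \sum_{(q,l)\neq(1,1)} X^{(T)}_{pk,ql}\!\left(|A_{11}|\right) P^{\,k+l}\, A_{ql}^{*},
\qquad P = e^{\,j\varphi(A_{11})}
```

Roughly, the \(X^{(F)}\) terms capture the large-signal harmonic amplitudes and phases, while the \(X^{(S)}\) and \(X^{(T)}\) terms capture sensitivity to mismatch at the ports; together they are what let a nonlinear VNA measurement yield both amplitude accuracy and output impedance for an active source.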

What did it take to make that happen? We’ve got a very engaged team on this call today. Bernadette?

The solution required a very distributed team because the problem was distributed, right? We needed deep metrology expertise to address the accuracy and traceability. We were going to be generating a lot of data, so we needed someone who could expertly address uncertainty. We needed the manufacturing specialists to ensure that we would be making these measurements more quickly with fewer connections. We needed the software wizards to implement it.
And at some point, we were going to need someone with marketing know-how, considering we were going to have to communicate the value of high-powered, future-proofed features that the user might not see a need for at the time of purchase. Our program touched on a little bit of everything we do. All of our unique skillsets. That’s really cool!


Yeah, thanks. What Bernadette was saying made me think of another point about the motivation of our team. You see, our R&D folks here are dedicated to designing instruments with some amazing capabilities. But we can’t reference those claims or put them in a datasheet if we don’t have the ability to verify them in our production testing. So, if a researcher realizes that our customers will someday need an analyzer that can handle 60 GHz for 5G backhaul, then we have to find a way to validate and test that design even if there’s not a mass market for it yet. We are motivated to be able to confidently claim the performance that we designed. I think our customers understand that conviction quite well—test engineers don’t want to be the reason that you can’t hit some design spec or company objective.
Bernadette, what insights do you gain as users of our own equipment?

We’re learning a lot as we optimize our workstreams. The optimized test techniques out of this program are very compelling and very pervasive; we see them being used companywide now across our portfolio. This process makes you stop and think about how our customers might use our instruments to test their own devices. It’s a huge learning opportunity when you can step back and see just how wonky a calibration process can be: you isolate the places that are hard to make sense of, or where it’s unclear what to do when things aren’t working quite right. We take our experiences as users of our equipment back to our R&D folks and ask them to rethink some of the processes and procedures that we, as designers, had assumed would be intuitive to users.

In retrospect, Mike, were you engineering to fix a customer problem, or trying to prevent one?

Well, that should be an easy question to answer, but it’s not, really. It seems to us that the more complicated the technology gets for the customer, the less certain they are about what level of accuracy they actually need. Their targets get more aggressive compared to their current capabilities. Intractable problems are not impossible; it becomes a question of how much of our finite time and resources we would have to invest to solve them. But having a team with such breadth and depth gives you the confidence to invent the bigger, leapfrog improvements without tripping over what’s right in front of you.