Problem Statement. Everyone will agree that a key issue in the design of multi-GHz interface devices is jitter performance. To characterize jitter performance, should you measure the timing jitter or the period jitter of the device? Should the bit error rate (BER) performance of the device also be tested? Is there any way to shorten the overall test time? And, as always, is there a way to reduce the test cost? Furthermore, is it appropriate to reduce equipment cost by partitioning test resources on-chip?

Which Jitter? First, you have to understand what kind of jitter you should measure when testing your device. For high-speed I/Os, the key measure is not the period fluctuation of the clock signal but the timing misalignment between the data sequence and the clock signal. Therefore, you should measure edge fluctuations (timing jitter) in both the data sequence and the clock signal. Only the ∆φ method can estimate timing jitter directly. Furthermore, a spectrum analyzer, which performs its measurement by sweeping a bandpass filter along the frequency axis, measures timing jitter as phase noise spectra in the frequency domain. Note that the most accurate instrument for measuring phase noise or timing jitter is not a time-interval analyzer but a spectrum analyzer. The RMS value of timing jitter is related to the area under the phase noise curve; thus, the RMS value of timing jitter can easily be calibrated using a spectrum analyzer.

Second, a very different way to perform jitter testing on a production line is needed. Since a jitter tolerance test sometimes takes 10-20 seconds, almost all data sheets of SerDes devices give only a typical value of jitter tolerance; in other words, production-line testing is not done. No currently available method can provide a jitter test solution in production test, either on-chip or off-chip. However, a method newly proposed at this ITC enables simultaneous measurement of both the jitter transfer function and the timing misalignment.
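The relation between the phase noise curve and the RMS value of timing jitter can be sketched numerically. The following is a minimal illustration, not a description of any particular instrument: it applies the standard conversion in which the single-sideband phase noise L(f), in dBc/Hz, is linearized, integrated over the offset-frequency range (the "area under the curve"), doubled to account for both sidebands, and square-rooted to give RMS phase jitter in radians, which division by 2πf_c converts to seconds. The phase-noise profile and carrier frequency in the example are invented for illustration.

```python
import math

def rms_jitter_from_phase_noise(freqs_hz, L_dbc_hz, carrier_hz):
    """Estimate RMS timing jitter (seconds) from an SSB phase-noise curve.

    freqs_hz   -- offset frequencies in Hz, ascending
    L_dbc_hz   -- single-sideband phase noise at each offset, in dBc/Hz
    carrier_hz -- carrier (clock) frequency in Hz
    """
    # Convert dBc/Hz to a linear power ratio per Hz.
    linear = [10.0 ** (l / 10.0) for l in L_dbc_hz]
    # Trapezoidal integration: the area under the phase noise curve.
    area = 0.0
    for i in range(len(freqs_hz) - 1):
        df = freqs_hz[i + 1] - freqs_hz[i]
        area += 0.5 * (linear[i] + linear[i + 1]) * df
    # Double for both sidebands; sqrt gives RMS phase jitter in radians.
    phase_jitter_rad = math.sqrt(2.0 * area)
    # Divide by the carrier's angular frequency to express it in seconds.
    return phase_jitter_rad / (2.0 * math.pi * carrier_hz)

# Hypothetical 2.5 GHz clock with an illustrative phase-noise profile.
offsets = [1e3, 1e4, 1e5, 1e6, 1e7]          # offset frequencies (Hz)
noise = [-80.0, -95.0, -110.0, -120.0, -130.0]  # phase noise (dBc/Hz)
jitter = rms_jitter_from_phase_noise(offsets, noise, 2.5e9)
print(f"RMS timing jitter: {jitter * 1e12:.2f} ps")
```

Because the integral runs only over the measured offset-frequency band, the result depends on the chosen integration limits, which is also why the bandwidth must be stated when an RMS jitter number is quoted.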