Analog intellectual-property design verification, characterization and behavioral model generation are extremely engineering- and computer-intensive processes. Analog IP lies dormant because it is extremely difficult to reuse in its current form; changes in fabrication processes, loss of the original designer's intent and changing design performance requirements all create obstacles. Additionally, a lack of understanding of the range of performance often makes it easier to start a new design rather than to attempt to reuse IP. Automatic analog characterization and behavioral model generation will encourage reuse.
“Silicon-calibrated” behavioral models, derived from extracted net-lists, enable design IP to be distributed, transferred or sold to third parties without the risk of disclosing confidential information. That can create new business opportunities. A predictable closed-loop analog-IP verification and characterization methodology, with automated behavioral Verilog-A and VHDL-A model generation, will create models that can form the basis of an analog-IP business.
The results of the analog-IP characterization process give the designer a view of the capabilities and performance ranges of the design. As the overall design progresses, that view increases in importance, since it shows the effects of incremental changes. Once a design has progressed to the layout stage, such issues as power-supply variations, timing tolerances and loading conditions become critical to the final operation of the IP.
Typically, the characterization of digital IP is limited to the effects on timing and power consumption with respect to changes in the operating environment. For example, the timing and power consumption of an inverter cell can be determined with a set of simple tests that sweep the supply voltage, load (fan-in, fan-out), temperature and edge rates (slope) of the input signal. The only two stimuli vectors that need to be applied are the set of rising and falling transitions. Because of the relative simplicity of the environment, a single test harness can be used to obtain the response of the device under test.
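The digital case above amounts to enumerating one simulation job per point in a small parameter space. A minimal sketch follows; the axis values and the `build_jobs` helper are illustrative, not drawn from any real cell library or process kit:

```python
from itertools import product

# Axes of the characterization space for a digital inverter cell.
# The corner values below are made up for illustration.
VDD_V   = [1.62, 1.80, 1.98]   # supply-voltage corners
LOAD_FF = [5, 10, 20, 40]      # output load (fF)
TEMP_C  = [-40, 25, 125]       # junction temperature
SLEW_PS = [20, 100, 500]       # input edge rate (slope)

# Only two stimulus vectors are needed: a rising and a falling edge.
EDGES = ["rise", "fall"]

def build_jobs():
    """Enumerate one simulation job per point in the sweep space."""
    jobs = []
    for vdd, load, temp, slew, edge in product(VDD_V, LOAD_FF,
                                               TEMP_C, SLEW_PS, EDGES):
        jobs.append({"vdd": vdd, "load_ff": load, "temp_c": temp,
                     "slew_ps": slew, "edge": edge})
    return jobs

print(len(build_jobs()))  # 3 * 4 * 3 * 3 * 2 = 216 simulation points
```

Because every job uses the same single test harness, the whole digital characterization reduces to running this flat job list through the simulator.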
Analog-IP characterization, however, requires a much more comprehensive look at the effects of the operating environment. Such design parameters as phase margin, jitter, gain and offset voltage must be examined with changes in supply voltage, input, loading, temperature and frequency. Since the actual behavior of analog IP in a design is critically dependent on those axes of parameter space, their effect on the IP must be understood.
Characterization of analog IP requires sweeping the parameters, as in digital characterization, but the measurements of many design parameters in analog IP cannot be made in a single test harness or with a single measurement. The test harness required to measure the input slew rate of an operational amplifier is different from the harness required to measure the amplifier's open-loop gain. Complex cells such as phase-locked loops, analog-to-digital converters or clock-recovery circuits (PLLs, ADCs or CRCs) require a far more extensive list of parameters. With the addition of new cell types to the characterization system's repertoire, the number of different test harnesses, stimuli generators, measurements and analyses rises geometrically, leading to unacceptably long simulation and measurement times for an entire library.
Another issue in analog characterization is the grouping and analysis of the results of individual measurements into a comprehensive picture of the cell in operation. Characterizations require that the measured data be fitted to a curve in some way. An example can be found in the relationship between the control voltage and output frequency for a voltage-controlled oscillator (VCO), reported as VCO gain. The standard method for measuring the gain of a VCO is to place it in a test harness, set its control voltage to a known value, simulate it in the transient domain until it has stabilized, and then make threshold-crossing measurements of the output voltage for a set period of time. The measurement of the output voltages then requires analysis to determine the operating frequency.
To make such information useful, the pairs of numbers are typically fitted to some nonlinear equation. Thus, analog characterization can require days or even months of simulation time to complete the characterization for even a simple cell.
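The VCO-gain flow just described (sweep the control voltage, extract the operating frequency from threshold crossings of the transient output, then fit the resulting pairs to a nonlinear equation) can be sketched in Python. The ideal-VCO waveform and its coefficients below are invented stand-ins for real transient-simulator output:

```python
import numpy as np

def measure_frequency(t, v, threshold):
    """Estimate frequency from rising threshold crossings of a waveform."""
    above = v >= threshold
    # Indices where the signal crosses the threshold going upward.
    rising = np.where(~above[:-1] & above[1:])[0]
    if len(rising) < 2:
        return 0.0
    # Linearly interpolate each crossing time for sub-sample accuracy.
    frac = (threshold - v[rising]) / (v[rising + 1] - v[rising])
    t_cross = t[rising] + frac * (t[rising + 1] - t[rising])
    return 1.0 / np.mean(np.diff(t_cross))

def fake_vco_waveform(vctrl, t):
    """Stand-in for the simulator: an ideal VCO whose frequency depends
    nonlinearly on the control voltage (coefficients are made up)."""
    freq = 1e6 * (50 + 30 * vctrl - 4 * vctrl**2)
    return np.sin(2 * np.pi * freq * t)

t = np.linspace(0, 2e-6, 200_001)
vctrl_points = np.linspace(0.5, 2.5, 9)
freqs = [measure_frequency(t, fake_vco_waveform(v, t), 0.0)
         for v in vctrl_points]

# Fit the (vctrl, frequency) pairs to a quadratic; its derivative is
# the voltage-dependent VCO gain in Hz/V.
coeffs = np.polyfit(vctrl_points, freqs, deg=2)
gain_at = np.polyder(np.poly1d(coeffs))
print(f"VCO gain at 1.5 V: {gain_at(1.5) / 1e6:.1f} MHz/V")
```

In a real flow the waveform would come from a transient simulation per control-voltage point, which is exactly why this single reported number can cost so much simulation time.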
There are many methods for reducing the compute time required to characterize an analog library of components. Distributed processing spreads the workload across many networked computers. While that mechanism works effectively for many simulations, it presents the problem of recombining the characterization data for final analysis and reporting. Unrolled parameter sweeps force the characterization controller to make intelligent decisions about which parts of the simulation to distribute and which to keep within a given simulation. Successive decomposition characterizes smaller blocks of the overall cell independently and assembles the results into a picture of the full top-level cell. The method is particularly useful when a cell is reused in many other cells.
Multithreaded simulations segment the simulation itself to take advantage of multiple processor machines. There are two applications of this method. The first simulates electrically independent parts of the design in parallel; the second simulates dependent parts of the design by multithreading the simulation algorithms themselves.
Of these, unrolling parameter sweeps is the most effective. A classic characterization job can be thought of as a set of nested loops that specify the values of each of the parameters making up the axes of parameter space. Unrolling the loops takes advantage of a parameter-sweep mechanism provided in some simulators. The goal is to decide when it makes sense to leave a characterization axis within a parameter sweep and unroll the other loops into distributed jobs. The process is most effective when loops are unrolled from the inside out.
Generally, most simulations have at least one parameter sweep and are subject to the concept of unrolled parameter sweeps. The more sweeps you unroll, the fewer the simulation jobs (though each runs significantly longer). In a distributed processing environment, the optimum point for unrolling sweeps is related to the number of available processors.
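One way to picture the trade-off, assuming a simulator that accepts one native parameter sweep per job: keep the innermost axes as in-simulator sweeps and unroll the outer loops into distributed jobs. The `unroll` helper and axis values below are hypothetical:

```python
from itertools import product

# Characterization axes, ordered outermost to innermost.
# The axis names and values are illustrative only.
axes = {
    "temp_c":  [-40, 25, 125],
    "vdd_v":   [1.62, 1.80, 1.98],
    "load_ff": [5, 10, 20, 40],
}

def unroll(axes, keep_inner=1):
    """Unroll all but the innermost `keep_inner` axes into separate
    distributed jobs; the kept axes remain a sweep inside each job."""
    names = list(axes)
    cut = len(names) - keep_inner
    outer, inner = names[:cut], names[cut:]
    jobs = []
    for values in product(*(axes[n] for n in outer)):
        jobs.append({"fixed": dict(zip(outer, values)),
                     "sweep": {n: axes[n] for n in inner}})
    return jobs

# Keeping the load sweep inside the simulator yields 3 * 3 = 9 longer
# jobs; unrolling everything yields 36 short jobs.
print(len(unroll(axes, keep_inner=1)))  # 9
print(len(unroll(axes, keep_inner=0)))  # 36
```

With a nine-processor pool, `keep_inner=1` saturates the machines with no leftover scheduling fragments, which is the kind of decision the characterization controller has to make.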
A test harness can be constructed that places two or more ac analysis jobs into a single simulation job. Since the job actually contains two independent pieces, an appropriate simulator can take advantage of the nature of the design to provide one top-level control process, farming the independent portions out to separate processors in a multiprocessor machine.
For a two-processor case, a single test harness with dual “plugs” can be used if the simulator in question can correctly partition the op amps and their measurements into different partitions. Since the stimulus is controlled from within the top-level job and is identical for both instances, the simulation job is somewhat shorter.
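A minimal sketch of that farming-out, with Python's thread pool standing in for the multiprocessor scheduling and a placeholder `ac_analysis` function in place of the real simulator call:

```python
from concurrent.futures import ThreadPoolExecutor

def ac_analysis(instance):
    """Stand-in for one independent ac analysis within the combined
    test harness; a real flow would invoke the circuit simulator."""
    name, freq_points = instance
    return name, [f * 2 for f in freq_points]  # placeholder result

# Two electrically independent op-amp instances in one dual-plug
# harness, under a single top-level control process.
instances = [("opamp_a", [1e3, 1e4, 1e5]),
             ("opamp_b", [1e3, 1e4, 1e5])]

with ThreadPoolExecutor(max_workers=2) as pool:
    results = dict(pool.map(ac_analysis, instances))

print(sorted(results))  # ['opamp_a', 'opamp_b']
```

The top-level process supplies the identical stimulus once, and the two partitions run concurrently, which is where the reduction in wall-clock simulation time comes from.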