In my last blog, Is It Real or Is It Analog?, there was an interesting question about the way in which I had written the equation for the trapezoidal integration function. It is not normally written that way, because the usual form assumes a fixed time increment, whereas in an analog simulator the time step is variable.
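As a minimal sketch of the point, here is the trapezoidal rule written so that each interval carries its own width, rather than assuming a single fixed step. The function name and sample data are my own illustration, not from the original post.

```python
def trapezoidal_integrate(ts, fs):
    """Integrate samples fs taken at (possibly uneven) times ts using the
    trapezoidal rule: each interval contributes
    (t[n+1] - t[n]) * (f[n] + f[n+1]) / 2."""
    total = 0.0
    for n in range(len(ts) - 1):
        h = ts[n + 1] - ts[n]          # variable time step, per interval
        total += h * (fs[n] + fs[n + 1]) / 2.0
    return total

# Uneven time points; f(t) = t, so the integral over [0, 2] is exactly 2.0
# (the trapezoidal rule is exact for linear functions).
area = trapezoidal_integrate([0.0, 0.5, 1.2, 2.0], [0.0, 0.5, 1.2, 2.0])
```

With a fixed step the width `h` would factor out of the sum; with a variable-step solver it cannot, which is why the equation looks different.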
This made me wonder how many people are actually familiar with the way in which an analog simulator deals with time, and whether an explanation might help people write better models. None of this would be necessary if analog systems were in fact linear, and I am sure analog designers would love that to be the case, but in reality there will always be saturation and possibly even discontinuities. For good simulator performance, these should be minimized.
Analog solvers work by assuming that everything is linear around the current operating point, and they adjust time to try to make that assumption hold. If the solver estimates that the result falls outside its accuracy tolerance, it retries with a smaller time step. This improves the convergence of the solution, but the other side of it is that the equations now have to be solved far more often.
Simulators have mechanisms both to shrink the time step and, afterwards, to grow it again. If a simulator does not converge on a sufficiently accurate solution within a certain number of iterations, it assumes that the time step was too large for the non-linearity it was presented with. At that point it abandons the computation, reduces the time step, and recomputes from the previous accepted point.
If the simulator reaches its smallest allowed time step and still cannot converge on a solution, it issues an error and the simulation stops. But assuming it does reach a point where it can find a solution, what impact does this have on future time steps? While the details vary between simulators, most limit the growth of subsequent time steps, typically so that the next time step can be no larger than twice the current one.
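The control loop described above can be sketched as follows. This is a hypothetical illustration, not any particular simulator's implementation: `solve_step` stands in for one attempt at solving the circuit equations, and the halving and doubling factors are the typical values mentioned in the text.

```python
def advance(t, dt, dt_min, dt_max, solve_step):
    """Try to advance the simulation from time t.
    On failure, halve dt and retry from t; if dt is already at dt_min,
    give up. On success, allow the next step to grow by at most 2x."""
    while True:
        if solve_step(t, dt):
            # Accepted: the next step may at most double, capped at dt_max.
            return t + dt, min(2 * dt, dt_max)
        if dt <= dt_min:
            raise RuntimeError("no convergence at minimum time step")
        dt = max(dt / 2, dt_min)   # reject, shrink, recompute from t

# Toy solver: pretend a step event at t = 0 only converges with dt <= 1 ps,
# while everything elsewhere converges at any step size.
def toy_solver(t, dt):
    if t < 1e-12:
        return dt <= 1e-12
    return True

# Starting with a 2 ns step at the event, the solver must halve its way
# down below 1 ps before the step is accepted.
t, dt = advance(0.0, 2e-9, 1e-15, 2e-9, toy_solver)
```

The point of the sketch is the asymmetry: a rejection can cut the step by orders of magnitude in one event, but recovery is limited to a factor of two per accepted step.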
Assume for a moment that a simulation was progressing with a 2ns time step. A non-linear event, such as a stepped input, could cause the solver to drop down to a 1ps time step. With the step at most doubling each time, it would then take eleven time steps to get back to a 2ns interval (2ps, 4ps, 8ps… 2048ps). That single event will have caused a huge slowdown in simulation speed, a slowdown that may not be easy to find. This is an example of how important good modeling practices are for mixed-signal designs.
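The recovery count is easy to verify with a few lines, assuming the at-most-doubling rule from above:

```python
# Count the doublings needed to recover from dt = 1 ps back to 2 ns,
# given that each accepted step may at most double the previous one.
dt, steps = 1e-12, 0
while dt < 2e-9:
    dt = 2 * dt
    steps += 1
# steps is 11: 1 ps doubles to 2048 ps (~2 ns) after eleven steps
```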
In this case, just making that digital input a ramp rather than a step would have resulted in better performance, and at the same time would have been more realistic. Abstraction does not necessarily mean high simulator performance, and in this case reality would have helped the analog solver. A digital signal fed into an analog circuit should always be passed through a transition filter that imposes a fixed slew rate on the signal. This applies not only to data signals but to control signals as well.
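A minimal sketch of such a transition filter, in the spirit of (but not identical to) the `transition` filter found in languages like Verilog-AMS — the function name and rates here are my own illustration:

```python
def slew_limit(target, current, slew_rate, dt):
    """Move `current` toward `target`, changing by no more than
    slew_rate (V/s) over the interval dt. Turns a step into a ramp."""
    max_delta = slew_rate * dt
    delta = target - current
    if abs(delta) > max_delta:
        delta = max_delta if delta > 0 else -max_delta
    return current + delta

# A 0 -> 1 V digital step limited to 1 V/ns becomes a 1 ns ramp:
# after four 0.25 ns intervals the output reaches 1 V.
v = 0.0
for _ in range(4):
    v = slew_limit(1.0, v, 1e9, 0.25e-9)
```

Because the filtered signal changes at a bounded, predictable rate, the solver sees a locally linear waveform instead of a discontinuity and never needs to collapse its time step.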
Consider the select input of a mux. If the output is switched from one input to another, even if both inputs are analog, a discontinuity will be created on the output. Again, this is not only problematic, it is also unrealistic. Instead, the select signal should be ramped so that the output transitions smoothly from one input to the other, or the output must be filtered to prevent the discontinuity.
What modeling tricks do you use to improve simulator performance?