In my last blog, Is It Real or Is It Analog?, there was an interesting question about the way in which I had written the equation for the trapezoidal integration function. It is not normally written that way: the usual form assumes a fixed time increment, whereas in an analog simulator the time interval is variable.
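For readers who have not seen it, a minimal sketch of the trapezoidal rule written for a variable step, which is the form an analog solver needs. The function and variable names here are illustrative, not from any particular simulator:

```python
def trapezoid_step(integral, x_prev, x_curr, dt):
    """Advance a running integral by one variable-width step.

    With a fixed increment h the rule collapses to the textbook
    (h/2) * (x[n] + x[n+1]); here dt may differ on every call,
    which is how an analog solver applies it.
    """
    return integral + 0.5 * dt * (x_prev + x_curr)

# Integrate x(t) = t over deliberately irregular time points;
# the exact answer over [0, 1] is 0.5.
times = [0.0, 0.1, 0.15, 0.4, 1.0]   # non-uniform steps
integral = 0.0
for t0, t1 in zip(times, times[1:]):
    integral = trapezoid_step(integral, t0, t1, t1 - t0)
print(integral)  # ~0.5 (the rule is exact for a linear integrand)
```

Because each step carries its own dt, the simulator is free to shrink or grow the interval between solves without changing the formula.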

This made me wonder how many people are actually familiar with the way an analog simulator deals with time, and whether an explanation might help people write better models. None of this would be necessary if analog systems were in fact linear, and I am sure analog designers would love that to be the case, but in reality there will always be saturation and possibly even discontinuities. Obviously, these should be minimized for good simulator performance.

Analog solvers work by assuming that everything is linear around the current operating point, and they adjust the time step to try to make this assumption hold. If the solver believes the result falls outside its accuracy tolerance, it will try a smaller time step. A smaller step improves the convergence of the solution, but the trade-off is that the equations now have to be solved far more frequently.

Simulators have mechanisms both to shrink the time step and, on the other side, to grow it again. If a simulator does not converge on a solution with sufficient accuracy within a certain number of iterations, it assumes the time step was too large for the non-linearity it was presented with. At that point it abandons the computation, drops to a smaller time step, and recomputes from the previous accepted point.

If the simulator reaches its smallest permitted time step and still cannot converge on a solution, it issues an error and the simulation stops. But assuming it does find a solution, what impact does this have on future time steps? The details differ between simulators, but most limit the growth of subsequent time steps, typically so that the next step can be no larger than twice the current one.
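The reject-and-halve policy just described can be sketched in a few lines. This is a hypothetical illustration, not any simulator's actual implementation; `solve`, `dt_min`, and `dt_max` are stand-in names, and `solve` is a placeholder for the nonlinear (Newton) iteration at a candidate time point:

```python
def advance(solve, t, dt, dt_min, dt_max, max_halvings=60):
    """One accepted time step under the reject-and-halve policy.

    solve(t, dt) stands in for the Newton iteration: it returns True
    when the nonlinear solve converged within its iteration limit.
    """
    for _ in range(max_halvings):
        if solve(t, dt):
            # Converged: allow the next step to grow, but never by
            # more than 2x, and never past the user ceiling.
            return t + dt, min(2 * dt, dt_max)
        if dt <= dt_min:
            raise RuntimeError("no convergence at minimum time step")
        dt = max(dt / 2, dt_min)  # reject, back up, retry smaller
    raise RuntimeError("step control gave up")

# Toy non-linearity: pretend convergence requires dt <= 4ps near t = 0.
ok = lambda t, dt: dt <= 4e-12
t, dt = advance(ok, 0.0, 2e-9, 1e-12, 2e-9)
print(dt)  # 7.8125e-12: twice the step that was finally accepted
```

Note how a single rejection cascade leaves the solver taking picosecond steps, and the 2x growth cap then dictates how slowly it climbs back.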

Assume for a moment that a simulation was progressing with a 2ns time step. A non-linear event, such as a stepped input, could cause the solver to drop down to a 1ps time step. It would then take 11 time steps to get back up to a 2ns interval (2ps, 4ps, 8ps… 2048ps), during which the simulator is taking steps far smaller than the circuit activity requires. That single event will have caused a huge slowdown in simulation speed, one that may not be easy to find. This is an example of how important good modeling practices are for mixed-signal designs.

In this case, just making that digital input a ramp rather than a step would have resulted in better performance, and at the same time would have been more realistic. Abstraction does not necessarily mean high simulator performance, and in this case reality would have helped the analog solver. A digital signal fed into an analog circuit should always be passed through a transition filter that imparts a fixed slew rate on the signal. This applies not only to data signals but to control signals as well.
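The behavior of such a transition filter can be sketched as a slew-rate limiter. This is a hypothetical illustration of the idea, not the `transition` construct of any particular modeling language:

```python
def transition_filter(prev_out, target, dt, slew):
    """Slew-rate limiter: move toward `target` by no more than
    slew * dt per step, so a digital edge becomes a finite ramp
    at the analog boundary instead of a discontinuity.
    """
    step = target - prev_out
    limit = slew * dt
    return prev_out + max(-limit, min(limit, step))

# A 0 -> 1 V step with a 1 V/ns slew limit, sampled every 0.25 ns:
out, dt = 0.0, 0.25e-9
trace = []
for _ in range(6):
    out = transition_filter(out, 1.0, dt, 1e9)  # 1 V/ns
    trace.append(round(out, 3))
print(trace)  # [0.25, 0.5, 0.75, 1.0, 1.0, 1.0]
```

The solver now sees a bounded derivative at the edge, so it never has to collapse its time step to resolve an instantaneous jump.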

Consider the selector on a mux. If the output is switched from one input to another, even if both inputs are analog, a discontinuity will be created on the output. Again, this is not only problematic but also unrealistic. Instead, the selector should be ramped so that the output transitions smoothly from one input to the other, or the output must be filtered to prevent the discontinuity.
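The ramped-selector idea amounts to a crossfade between the two inputs. A minimal sketch, with illustrative names, assuming the selector `sel` is itself swept continuously from 0 to 1 (for instance by the transition filter above applied to the control signal):

```python
def soft_mux(a, b, sel):
    """Crossfade mux: with sel swept continuously from 0 to 1,
    the output moves from input `a` to input `b` without a
    discontinuity, instead of snapping between them.
    """
    return (1.0 - sel) * a + sel * b

# Switch from a 3 V input to a 1 V input as the selector ramps:
for sel in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(soft_mux(3.0, 1.0, sel))  # 3.0, 2.5, 2.0, 1.5, 1.0
```

At the end points the mux is exact (sel = 0 gives `a`, sel = 1 gives `b`); in between, the output is continuous, which is both kinder to the solver and closer to what real switch circuitry does.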

What modeling tricks do you use to improve simulator performance?


To speed up the performance of the model, display only what is necessary. This can mean:

- Limit the number of visible displays and open scopes.
- For open scopes and displays, fix the decimation to a reasonable value.
- Load the model into memory using load_system instead of open_system, simulate it using the sim command, and then post-process/display the outputs.

These are all good additions to the list of ways to speed up a simulator. Disks are slow, and anything you can do to reduce the amount of disk access will help.

Is it, with primitive digital elements that use timing models and the built-in 12 or 16 state digital logic simulator…kind of mixed mode simulation.

I am not sure I understand the question. This does not relate to digital simulation, which works on a completely different principle. This is about analog solvers and, in particular, mixed-signal simulation, where digital events may get passed across the interface to analog components and cause the problems described.

@Brian: I guess I mixed it up; I was talking about mixed-signal only…

Hi Brian, I really like the subtlety you have exposed here. I was thinking about this in terms of a different abstraction: an infinite sum of different frequencies approximating the step change. This was discussed elsewhere, and your blog set me thinking that smearing out a step, which, as you point out, may be more realistic anyway, also reduces the number of frequency harmonics you have to include to represent it. In other words, an FFT of a true square step has significant power at several multiples of the digital frequency, but smoothing it out a bit diminishes the amplitude of the higher frequencies. I may be way off in the boondocks here, but it makes intuitive sense that smoothing a step will allow the solution to converge with larger time steps because of the lower frequencies in the FFT.

Am I nuts or does this analogy make sense?

@Brian I think there's a need to identify simulation bottlenecks; this is the root of the problem. If the application model does not provide the speed you require, then you need to run a simulation model advisor to identify possible areas of bottleneck and eliminate them.


I think the analogy is good. A square wave into any analog component is going to present difficulties. Thinking about it as requiring a set of harmonics works well in this case, because having to deal with higher frequencies naturally forces the simulator iteration interval down. What it would not lead you to directly is an understanding of how long the simulator then takes to accept that the frequencies it has to deal with have come back down, and the simulation speed that is wasted in the meantime.
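The harmonic argument is easy to check numerically. It is a standard Fourier result that an ideal square wave's harmonics fall off as 1/n, while giving the edges a finite rise time multiplies each harmonic by a sinc-like envelope, so the high harmonics shrink sharply. A rough sketch (illustrative names, plain single-bin DFT rather than a library FFT):

```python
import math

def harmonic(wave, n, N=2048):
    """Amplitude of the n-th Fourier harmonic of wave(t), t in [0, 1),
    estimated from N samples (a hand-rolled single-bin DFT)."""
    re = sum(wave(k / N) * math.cos(2 * math.pi * n * k / N) for k in range(N))
    im = sum(wave(k / N) * math.sin(2 * math.pi * n * k / N) for k in range(N))
    return 2 * math.hypot(re, im) / N

def square(t):
    return 1.0 if t % 1.0 < 0.5 else -1.0

def trapezoid(t, rise=0.05):
    """Same square wave, but with a linear rise/fall 5% of the period wide."""
    t %= 1.0
    if t < rise:                 # rising edge
        return -1.0 + 2.0 * t / rise
    if t < 0.5:                  # high plateau
        return 1.0
    if t < 0.5 + rise:           # falling edge
        return 1.0 - 2.0 * (t - 0.5) / rise
    return -1.0                  # low plateau

# Amplitude of the 21st harmonic: smoothing the edges shrinks it sharply.
print(harmonic(square, 21), harmonic(trapezoid, 21))
```

With these numbers the 21st harmonic of the smoothed wave comes out more than an order of magnitude below the square wave's, which matches the intuition in the comment above: less high-frequency content, so larger acceptable time steps.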