I'd like to continue my thoughts on simulators for mixed-signal evaluation, a topic we discussed last week.
Each piece of a design can be verified separately, but there comes a time when the whole thing has to be assembled. This is often the first time it's possible to see whether the specification was correct. End-to-end user scenarios must be shown to operate correctly, and this is often where the biggest problems arise for mixed-signal verification. I can think of four possible ways to do this, each with its own advantages and disadvantages. Unfortunately, no option addresses every need, and none comes without additional effort.
Lowest common denominator
The entire circuit could be modeled at the transistor level and simulated using a SPICE-type program. This can detect connectivity issues, and extracting the transistor-level netlist is a fairly automated process. The biggest problem is simulation performance: even a simple run would take a long time, and end-to-end user scenarios are probably impossible. In addition, SPICE simulators may simply lack the capacity for a full chip. This option is probably a nonstarter for us.
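To put rough numbers on that claim, here is a back-of-envelope sketch. Every constant in it is an illustrative assumption, not a measurement: SPICE-style engines advance time in small steps, run a few Newton iterations per step, and each iteration solves a sparse matrix whose cost grows superlinearly with device count.

```python
# Back-of-envelope cost model for transistor-level simulation.
# All constants are illustrative assumptions, not measured data.

def spice_runtime_hours(devices, sim_time_s, timestep_s=1e-10,
                        newton_iters=3, solve_exponent=1.2,
                        unit_cost_s=1e-9):
    """Estimate wall-clock hours for a SPICE-style run.

    Cost per timestep ~ newton_iters * devices**solve_exponent,
    scaled by an assumed per-device solve cost (unit_cost_s).
    """
    steps = sim_time_s / timestep_s
    cost_per_step = newton_iters * (devices ** solve_exponent) * unit_cost_s
    return steps * cost_per_step / 3600.0

# One millisecond of circuit time on a 10M-device SoC:
print(f"{spice_runtime_hours(10_000_000, 1e-3):,.0f} hours")
```

Even with those fairly charitable constants, one millisecond of circuit time on a ten-million-device design works out to roughly two thousand hours of wall-clock time, which is why end-to-end scenarios are off the table.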
All digital

Assuming that all the analog circuitry could be modeled at the gate level, a much faster simulation would be possible. Logic simulators already struggle with full-chip simulations, but it could be done, and it would also open the door to migrating the design onto an emulator or FPGA prototype. However, the accuracy would be very low, and it is unclear how much this would actually accomplish.
Mixed-signal simulation

This would seem like the logical choice: leave everything at the abstraction for which it was designed, and use a mixed-signal simulator. The problem here is Amdahl's Law, which basically says the whole thing will be dragged down to the speed of the slowest component, in this case the analog simulator. Also, as far as I know, most SPICE simulators are not good at simulating multiple independent pieces of a design at the same time. The engine wants to solve them all as a single set of equations, and this makes the whole thing a lot more complex than it needs to be.
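For the record, the standard form of the law, applied here to co-simulation: if a fraction p of the simulation work runs on the fast digital engine with speedup s, and the rest stays in the analog solver, the overall speedup is bounded by

```latex
S_{\text{overall}} = \frac{1}{(1 - p) + p/s}
```

So if the analog portion accounts for just 10% of the work (p = 0.9), even an infinitely fast digital simulator yields at most a 10x overall speedup.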
Behavioral models

Some people have been singing the praises of behavioral analog models for a long time. Using them within a digital, event-driven simulator would make a lot of sense, and it would also enable higher levels of abstraction for the digital logic, improving overall simulation performance.
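To make the idea concrete, here is a minimal sketch of what a behavioral analog block can look like inside an event-driven simulation. It is a toy, not any particular tool's API: between events the block advances a closed-form solution (here a single-pole RC response), so no analog solver and no small timestep is involved.

```python
import math

# Minimal sketch of a behavioral analog block in an event-driven world.
# Between events the output follows the closed-form RC step response,
# so the model only does work when something happens.
# All names and values here are illustrative, not from any real library.

class RCFilter:
    def __init__(self, tau):
        self.tau = tau          # time constant (seconds)
        self.v_out = 0.0        # last computed output voltage
        self.v_in = 0.0         # current driving voltage
        self.t_last = 0.0       # time of last evaluation

    def evaluate(self, t):
        """Advance the analytic solution to time t."""
        dt = t - self.t_last
        self.v_out = self.v_in + (self.v_out - self.v_in) * math.exp(-dt / self.tau)
        self.t_last = t
        return self.v_out

    def drive(self, t, v_in):
        """Input-change event: settle state up to t, then apply the new drive."""
        self.evaluate(t)
        self.v_in = v_in

# Event list: (time, new input voltage), as a digital driver would produce.
f = RCFilter(tau=1e-6)
for t, v in [(0.0, 1.0), (2e-6, 0.0), (5e-6, 1.0)]:
    f.drive(t, v)
print(f"output at 6us = {f.evaluate(6e-6):.3f} V")
```

The speed comes from doing work only at events; the cost is that everything the closed-form expression leaves out (loading, nonlinearity, noise) is simply gone.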
It seems like the perfect solution, except for a few small issues. The first is modeling. Who will create these models, given that they are used only for system verification and cannot become an integral part of the development process? What skills do the designers making them need? They must understand both analog design and high-level modeling languages. And how are the models themselves verified? At some point they have to be compared against the low-level models, and small differences between the two can quickly produce diverging results, which makes comparison a difficult task.
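In practice, that comparison usually ends up as a tolerance check between sampled waveforms rather than an exact match. A hypothetical sketch of such a check, with made-up traces and tolerances for illustration:

```python
# Hypothetical check: compare a behavioral trace against a SPICE
# reference sampled at the same time points, using an absolute-plus-
# relative tolerance band instead of exact matching.

def waveforms_match(reference, candidate, abs_tol=1e-3, rel_tol=0.02):
    """Return (ok, worst_error) over two equal-length sample lists."""
    worst = 0.0
    for ref, got in zip(reference, candidate):
        err = abs(got - ref)
        worst = max(worst, err)
        if err > abs_tol + rel_tol * abs(ref):
            return False, worst
    return True, worst

spice_trace      = [0.000, 0.393, 0.632, 0.777, 0.865]
behavioral_trace = [0.000, 0.390, 0.635, 0.774, 0.869]
ok, worst = waveforms_match(spice_trace, behavioral_trace)
print(f"match={ok}, worst error={worst*1000:.1f} mV")
```

The catch is exactly the point above: an error that sits comfortably inside the tolerance band can still flip a downstream comparator in one scenario and not in another.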
Companies have to weigh the costs and benefits, but nobody seems to think an ideal solution has been found. Mixed-signal languages continue to be developed. Last week, Accellera released the latest version of the SystemC-AMS standard, which I will write about in an upcoming blog.
What methods are you using, and what do you see as the biggest advantages and disadvantages?