Over the years, increasingly sophisticated tools have greatly expanded what circuit simulation can do. These tools let designers explore circuit interactions and work toward solutions to complex issues. The combination of complex models for sub-micron technologies and the push toward first-pass success is driving simulation tools to do more, and to support analyses that were not possible in the past.
In addition, the cost of producing products in these nanometer technologies is becoming prohibitive, which limits the number of silicon spins available to fix circuit issues and reach adequate yield margins at an acceptable product cost. Getting it right the first time is more important than ever.
Issues arising from on-chip interactions, and from the complexity of analog functions integrated alongside large areas of digital circuitry, should not be underestimated. These interactions create an environment where the probability of first-pass production success diminishes as we move down the technology path.
As a result of the integration of these complex analog and digital circuits into a single IC (in an effort to create embedded solutions on a single substrate), the need to simulate more of the embedded system and individual block interfaces more thoroughly is critical to first-pass success. These complexities require the need to simulate the statistical variations, as well as other complex circuit parameters, before the first silicon is released.
The desire to simulate these various interactions for higher first-silicon success places more demand on accurate simulation test benches as well as accurate transistor models. In my blog, Where Are Your Analog Design Models Accurate? I speak about how models do not always fit the transistor sizes used in analog design. This means the large number of simulations a designer chooses to run is only as accurate as the models allow, and, trust me, even accurate models do not catch all of the interactions. Between inaccurate transistor models and test benches that lack the proper interfaces or operating conditions, the old saying about simulations still carries weight: "garbage in equals garbage out."
Therefore, we as designers cannot be simulation jocks, but must understand the trends and have a clear understanding of what the circuit sensitivities are for the blocks being developed. In one of my previous blogs, What Happened to Napkin Sessions? I pointed out the need to complete back-of-the-envelope-style calculations to gain this understanding.
These first-order calculations may not be accurate, but they can give the engineer critical insight into the trends expected in the circuit. They also provide a basis for questioning simulation results: do the numbers make sense, or could there be a modeling issue?
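As a concrete illustration of this kind of back-of-the-envelope check, here is a minimal sketch that compares a hand-calculated transconductance against a simulated value. The square-law estimate, the bias numbers, and the "simulated" gm are all hypothetical stand-ins, not values from any real design; the point is the habit of checking the trend, not the specific figures.

```python
# A rough first-order sanity check (hypothetical numbers): compare a
# hand-calculated transconductance against a simulated value before
# trusting the rest of the simulation data.

def gm_first_order(i_d, v_ov):
    """Square-law estimate of transconductance: gm ~ 2*Id/Vov."""
    return 2.0 * i_d / v_ov

i_d = 100e-6   # drain bias current, 100 uA (assumed)
v_ov = 0.2     # overdrive voltage, 200 mV (assumed)

gm_hand = gm_first_order(i_d, v_ov)   # about 1.0 mA/V by hand
gm_sim = 0.85e-3                      # value read off a simulation (hypothetical)

# A large disagreement is a cue to question the model fit at this
# device size, or the test bench, before running hundreds of corners.
error = abs(gm_sim - gm_hand) / gm_hand
if error > 0.3:
    print(f"gm off by {error:.0%}: question the model or the bench")
else:
    print(f"gm within {error:.0%} of the hand estimate")
```

The square law is itself only a first-order model for short-channel devices, so the threshold for "disagreement" should be generous; the calculation is there to catch gross trend errors, not to replace the simulator.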
What do I mean by simulation jocks? I, like many of you, have been in design reviews that show massive simulation results generated by complex tools. The engineer may present massive quantities of data without understanding the trends the data are suggesting about the sensitivities simulated in the circuit blocks. This is who I would call a simulation jock. Having said this, I do not mean to insult, but to call to our attention the potential for all of us at various times in our design cycles to become simulation jocks.
The tools and horsepower of the computer farms used to simulate circuits have enabled massive simulations once thought impossible to complete. These trends are great, but I fear that this capability can turn engineers into simulation jocks. How do we avoid becoming simulation jocks? I would like to suggest several ideas:
- We can learn a lot from looking at operating points in an analog circuit. Explore the transistor operating points and try to understand critical parameters where you know your circuits are sensitive to variation.
- Think temperature. How does your circuit vary with temperature? Can problems be avoided by using a properly temperature-compensated bias current or voltage?
- How far are the transistor operating points from unwanted regions once the circuit is operating in normal mode? Do you have adequate margins for node variations in normal operation?
- When you run your simulations, ask yourself if the results make sense under normal conditions. Do a mental "what if" exercise. For instance, what if this transistor's operating point changed by 10%-20%? Would the circuit's performance suffer?
- Do not launch extensive corner simulations until you understand the sensitivities of the circuit under nominal operating conditions.
- Based on the sensitivities explored under nominal conditions, focus your simulations on the corners you expect to be most sensitive to variation.
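The "what if" and corner-triage steps above can be sketched in a few lines. This is not a real simulator interface; `gain_metric` is a hypothetical stand-in for whatever performance number your test bench reports, and the parameter names and values are invented for illustration. The idea is simply to perturb each parameter around nominal, rank the impact, and let the ranking decide which corners deserve full simulation time.

```python
# A minimal sketch (hypothetical model, not a simulator API) of the
# nominal "what if" screen: perturb each parameter by +/-20% around
# nominal, rank the impact on a performance metric, and use the ranking
# to pick which corners are worth full corner simulations.

def gain_metric(params):
    """Stand-in for a circuit performance metric (invented first-order model)."""
    return params["gm"] * params["rout"] * (1.0 - 0.002 * (params["temp_c"] - 27.0))

nominal = {"gm": 1.0e-3, "rout": 50e3, "temp_c": 27.0}

sensitivities = {}
base = gain_metric(nominal)
for name in nominal:
    hi = dict(nominal); hi[name] = nominal[name] * 1.2   # +20% perturbation
    lo = dict(nominal); lo[name] = nominal[name] * 0.8   # -20% perturbation
    # Record the worst-case fractional shift in the metric.
    shift = max(abs(gain_metric(hi) - base), abs(gain_metric(lo) - base)) / base
    sensitivities[name] = shift

# Most sensitive parameters first: these point at the corners worth running.
for name, shift in sorted(sensitivities.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {shift:.1%} metric shift for a 20% parameter change")
```

In practice the perturbations would come from your process data rather than a flat 20%, and temperature would be swept over its real range, but even this crude screen forces you to articulate which parameters the block is sensitive to before launching the full corner matrix.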
Furthermore, we must avoid relying too heavily on simulations at the expense of analyzing what the data are suggesting. As engineers, we should accept that a little pessimism goes a long way toward designs that are robust and work in silicon the first time.
Have you seen similar issues at your company? How do you think we can avoid becoming simulation jocks?