Does Analog Integration Really Need to Go Beyond 0.18μm?

The move to scale down to 0.18μm was based on the following: reduce the feature size in the front end of a device, reduce the back end (interconnect), and add more interconnect layers. Doing so would greatly increase the density of digital circuitry and reduce the intrinsic gate-switching delay.

With the smaller geometry, the supply voltage had to be decreased to prevent voltage breakdown. Unfortunately, this led to an increased gate-switching delay in digital circuitry and lower dynamic range in analog circuits. Process designers lowered FET gate threshold voltage to compensate for the switching delay problem. In the analog realm, scaling did not bring much area reduction. However, it did yield higher-speed transistors, which led to silicon implementation of RF circuitry and high-speed analog blocks such as ADCs and DACs.

The problems in going below 180nm or even 90nm CMOS for analog can include further reduction in supply voltages, design productivity, and signal integrity. If you ask a group of designers the question in the title of this blog, you will get many different answers and opinions. It depends upon what you are trying to achieve in a highly integrated design. Note that engineers still do take the simpler approach and use single op amps and discrete transistors in their designs. Just as Bob Pease always reminded us: “KISS.”1

It really comes down to what you need to achieve in a system design and whether a highly integrated IC fits in your architecture. Complex systems on a chip (SoC) with mixed-signal design, embedded high-performance analog, and sensitive RF front ends combined with digital circuitry have achieved such breakthroughs as base stations on a chip. Smaller is better here; but smaller is not always the best way to go for all designs.

Analog circuit design challenges in nanometer technology
One main effect of scaling to a smaller line size, seemingly an advantage on the surface, is the reduction in power-supply voltage. Scaling will not necessarily bring large area reductions in analog as it does in digital: the active area of analog transistors is determined by kT/C thermal-noise or mismatch-induced offset constraints, not by the lithography, and those constraints bound the achievable dynamic range and accuracy.
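
To get a feel for the kT/C constraint, here is a minimal Python sketch (illustrative capacitor values at room temperature, not taken from the references) showing how the noise floor ties directly to capacitor size, and hence to area and power, regardless of lithography:

```python
import math

k = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0         # temperature, K

# RMS thermal noise sampled onto a capacitor: v_n = sqrt(kT/C)
for C in (0.1e-12, 1e-12, 10e-12):  # farads
    v_n = math.sqrt(k * T / C)
    print(f"C = {C*1e12:4.1f} pF -> kT/C noise = {v_n*1e6:6.1f} uVrms")

# Halving the noise floor requires 4x the capacitance -- and
# proportionally more power to charge it at the same speed.
```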

See Equation 1 for the relationship2 between achievable speed, dynamic range and power. The term “technconst” is an arbitrary constant:

(Speed × Accuracy²) / Power = technconst     (Equation 1)

So with respect to thermal noise, the constant on the right side of the equation depends only upon temperature. In the case of mismatch, the constant is set by the matching properties of the process technology used. See Figure 1 for a plot of these relationships for a real technology process.
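
What Equation 1 implies is easiest to see numerically. A minimal sketch, using a made-up placeholder for technconst (not a real process figure): at fixed speed, each extra bit of accuracy quadruples the minimum power.

```python
TECHN_CONST = 1e30  # hypothetical figure of merit: Hz * accuracy^2 / W

def min_power_w(speed_hz: float, accuracy: float) -> float:
    """Minimum power implied by (Speed x Accuracy^2) / Power = technconst."""
    return speed_hz * accuracy**2 / TECHN_CONST

# Doubling accuracy (one more bit) at fixed speed quadruples power:
p10 = min_power_w(100e6, 2**10)  # ~10-bit dynamic range at 100 MS/s
p11 = min_power_w(100e6, 2**11)  # 11-bit at the same speed
print(p11 / p10)                 # -> 4.0
```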

Figure 1: Thermal noise and mismatch limit in the power-speed-accuracy tradeoff governing analog circuits. (Image courtesy of reference 3)

So we see from Figure 1 that for untrimmed or uncalibrated circuits, the mismatch limit determines the minimum required power consumed for a particular speed and dynamic-range spec. The red squares in the graph mark real ADC designs.

As the technology scales, transistor mismatch will improve slightly. So if a designer needs the higher speed gained by the scaled technology, he will have to accept the increased power penalty at the same dynamic range.

For a fixed speed and accuracy, power would decrease thanks to improved matching were it not for the reduced power-supply voltage that comes with nanometer technology. The input-signal headroom shrinks, and this in turn forces tighter thermal-noise and offset constraints.
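
A worked example of that squeeze, under two simplifying assumptions (full-scale signal set by the supply, noise floor set by kT/C): halving the signal swing forces four times the capacitance to hold the same SNR, which at the same speed costs roughly four times the power.

```python
import math

k, T = 1.380649e-23, 300.0  # Boltzmann constant, room temperature

def cap_for_snr(v_fullscale: float, snr_db: float) -> float:
    """Capacitance whose kT/C noise yields the target SNR for a
    full-scale sine of peak-to-peak amplitude v_fullscale."""
    v_sig_rms = v_fullscale / (2 * math.sqrt(2))
    v_noise_rms = v_sig_rms / 10**(snr_db / 20)
    return k * T / v_noise_rms**2

c_3v0 = cap_for_snr(3.0, 70.0)  # 70 dB SNR at 3.0 V swing
c_1v5 = cap_for_snr(1.5, 70.0)  # same SNR at half the swing
print(c_1v5 / c_3v0)            # -> 4.0: four times the capacitance
```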

Next, how is analog design productivity affected by technology scaling? Since analog design is still considered a “black art,” a great many analog designs are carefully done by expert analog designers who take into account the myriad variables that can affect a good-performing analog circuit. Hence the analog portion takes longer to design and is more likely to have errors than its digital counterpart, which has better simulation and auto-routing capability in most cases.

Better analog CAD tools are in great demand
Finally, signal integrity can suffer as technology scales. Analog and RF circuitry are very susceptible when on the same die as “noisy” digital circuitry. Crosstalk — either radiated or conducted — can wreak havoc on sensitive analog circuits. Digital switching waveforms contain significant harmonic energy due to their fast transitions, and these upper frequencies may propagate through the shared substrate.
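
A common rule of thumb (an approximation, not from the references above) bounds a trapezoidal clock's spectral envelope with a knee at f ≈ 1/(π·t_rise), beyond which the envelope falls off at roughly 40 dB/decade. A quick sketch of where that knee lands as edges get faster:

```python
import math

def spectral_knee_hz(t_rise_s: float) -> float:
    """Second knee of a trapezoidal waveform's spectral envelope,
    f = 1/(pi * t_rise); content above this rolls off ~40 dB/decade."""
    return 1.0 / (math.pi * t_rise_s)

for tr in (1e-9, 100e-12, 20e-12):  # slower logic down to nm-CMOS edges
    print(f"t_rise = {tr*1e12:5.0f} ps -> envelope knee ~ "
          f"{spectral_knee_hz(tr)/1e9:5.1f} GHz")
# Faster edges push significant harmonic energy well into the GHz range,
# right on top of RF bands sharing the substrate.
```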

I firmly believe that these are not insurmountable problems in the integration of analog as next-generation scaled technology progresses to smaller and smaller lithography nodes. I am excited about what is to come as we break through paradigms of the present and move into the promising future of integrated electronics.

Just remember that there are still so many areas where reduced technology nodes are needed for analog performance, power, and die-footprint improvements. Transistor fT values of 350 GHz and beyond are possible now. These will see use in such circuit functions as LNAs (low-noise amplifiers) and ultra-high-speed data converter designs (which will also require higher-data-rate interfaces, by the way).

As good examples, take a look at reference 4, below, to see a 4.5mW, 8-bit ADC at 750 Msps, which could only be done in a much smaller technology node like 28 nm. Or reference 5, which shows a 6-bit, 28 Gsps DAC at 90nm. We can go further with good examples that need these deep-submicron nodes, such as the work that IMEC is doing with Renesas or the base station on a chip that Freescale has developed. (See: Integrated RF Analog, Multi-Standard, Software-Defined Radio Receivers.)

What are your experiences working with sub-180nm and sub-90nm devices? For the analog applications, how did you deal with the limited available voltage range?

References:

  1. KISS stands for “keep it simple, stupid.” In some attributions, the last “s” has a more vulgar equivalent. The acronym is sometimes traced back to army drill instructors' guidance to recruits in training.
  2. “Impact of transistor mismatch on the speed-accuracy-power trade-off of analog CMOS circuits,” P. Kinget & M. Steyaert, Proceedings of the IEEE CICC, pp. 333-336, May 1996.
  3. “Analog and digital circuit design in 65 nm CMOS: end of the road?,” Georges Gielen & Wim Dehaene.
  4. “A 4.5-mW 8-b 750-MS/s 2-b/Step Asynchronous Subranged SAR ADC in 28-nm CMOS Technology,” Yuan-Ching Lien.
  5. “A 28GS/s 6b Pseudo Segmented Current Steering DAC in 90nm CMOS,” Thomas Alpert, Felix Lang, Damir Ferenci, Markus Grözing, Manfred Berroth.

18 comments on “Does Analog Integration Really Need to Go Beyond 0.18μm?”

  1. RedDerek
    August 14, 2013

    I have not worked with stuff down in the smaller size from an IC perspective. But I would gather that as the feature size is reduced, the voltage is reduced and power consumption is reduced and speed can be increased; all good points.

    The drawback is the amount of signal one can use. If before I had a 3V signal with 10mV of noise, that gave me 2.99 volts of signal to work with. If the voltage now drops to 2V and the noise stays at 10mV, that gives me only 1.99 volts of signal. If I break both down into 1024 steps, the steps do get smaller, but at some point a step becomes “noise” level. Now, if noise can be reduced at the same time, that is good.
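
    A quick numeric sketch of that arithmetic (ideal 10-bit steps assumed, illustrative values):

    ```python
    noise_v = 10e-3  # 10 mV of noise in both cases
    steps = 1024     # "1024 chops" ~ an ideal 10-bit quantizer

    for supply in (3.0, 2.0):
        lsb = supply / steps
        print(f"{supply:.1f} V range: LSB = {lsb*1e3:.2f} mV, "
              f"noise spans ~{noise_v/lsb:.1f} LSBs")
    # 3.0 V: LSB ~2.93 mV, noise covers ~3.4 codes
    # 2.0 V: LSB ~1.95 mV, noise covers ~5.1 codes -- same noise, more codes lost
    ```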

  2. goafrit2
    August 14, 2013

    Yes, it has to go below 0.18um because any space saved is money. When you have many transistors, that is money. However, not every application needs a 0.18um process in analog design. I have read of MEMS companies moving below 3.3um, which does not make sense to me. You can do all you want to do with a mechanical beam at 3.3um and still be fine, and avoid all the issues associated with sub-submicron processes.

  3. goafrit2
    August 14, 2013

    >> But I would gather that as the feature size is reduced, the voltage is reduced and power consumption is reduced and speed can be increased; all good points.

    Actually, it does not scale that proportionally. You have this problem that your oxide thickness is not scaling with your VDD reduction. It is a mess at that level, and that is why there are real problems like higher static power dissipation and gate-oxide breakdown, among other issues.

  4. goafrit2
    August 14, 2013

    The best narrative for this higher integration is always digital, as Intel is the most prominent firm trying to keep Moore's Law alive. Possibly they have two more generations before quantum effects take hold and block further miniaturization. Digital does not have many issues, though power is a concern. But analog does not have real value for any meaningful circuit design below 0.18um. The fact is that you are reducing dynamic range when noise is not reduced in the natural operating world.

  5. Steve Taranovich
    August 14, 2013

    You're right: it's a matter of physics and noise limitation, and dynamic range always suffers as you go to finer line sizes. Someday someone will come up with a way around that, just like we did with delta-sigma converters, which pushed the noise out beyond the signal of interest with that innovative design architecture. No one thought we would ever be able to get true 20+ bit ADCs and DACs.

  6. Steve Taranovich
    August 14, 2013

    Alas, an area where digital actually beats analog 🙂

  7. jonharris0
    August 15, 2013

    ” But analog does not have real value for any meaningful circuit design below 0.18um.”

    Great comments and dialogue on this blog; it is definitely an interesting topic. I'll have to disagree with the comment above in particular. Designs below 0.18um may not be suitable for some analog applications, but there is definitely meaningful analog circuit design below 0.18um. Data converters, for example, I would argue are certainly meaningful, and there are converters on the market today that are sub-0.18um. Again, though, great comments and dialogue. I enjoyed the reading here.

  8. SunitaT
    August 20, 2013

    With the scaling down of supply voltage and CMOS technology below 90 nm, leakage power plays a growing role in the overall power dissipation. Future cryptosystems will need to address this trend, though it has not been of concern yet in low-cost cryptosystems such as smartcards and RFID tags, which currently use older technologies and low-performance transistors.

  9. goafrit2
    September 18, 2013

    >> no one thought we would ever be able to get true 20+ bit ADCs and DACs

    Big question – do we really get those 20+ bits in terms of ENOB, INL, and DNL? I think you get 18 bits and the rest is “marketing bits.” The best I have done is 14 bits, and I have done all I can to extend that. Any link to a truly 20+ bit ADC on the market?
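
    For reference, the standard conversion from measured SNR to effective bits, as a quick sketch:

    ```python
    def enob(snr_db: float) -> float:
        """Effective number of bits from SNR of a full-scale sine input."""
        return (snr_db - 1.76) / 6.02

    # A "20-bit" converter specified at 104 dB SNR:
    print(enob(104.0))  # ~17.0 effective bits
    ```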

  10. goafrit2
    September 18, 2013

    >>   Data converters for example I would argue are certainly meaningful and there are converters on the market today that are sub-0.18um.

    Could you point out one key advantage you have experienced for converters below 0.18um? I know the smaller transistor size is there, but what value does it really give you? For me, you get all the value in digital, but I am not sure that is the case in analog design.

  11. jonharris0
    September 18, 2013

    Sure, if you want higher sampling rates (i.e., more speed out of the device), a smaller process node is required. You have to scale the parasitics down with the process to achieve higher sampling rates: faster switching of the switched-cap inputs, decreased settling times, and other factors. As with most anything today, speed comes from reducing size. Applications are pushing for higher bandwidth and eventually, I think, direct RF-to-digital sampling…
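
    A quick sketch of that settling argument, using a single-pole RC model with illustrative values (settling to within 1/2 LSB takes about (N+1)·ln 2 time constants):

    ```python
    import math

    def settle_taus(bits: int) -> float:
        """Single-pole time constants to settle within 1/2 LSB of N bits."""
        return (bits + 1) * math.log(2)

    def max_rate_hz(r_ohm: float, c_f: float, bits: int) -> float:
        """Rough sample-rate bound if half the period is spent acquiring."""
        return 0.5 / (settle_taus(bits) * r_ohm * c_f)

    # Same 8-bit settling budget; switch resistance shrinks with the node:
    print(max_rate_hz(200.0, 1e-12, 8) / 1e6)  # ~401 MS/s
    print(max_rate_hz(50.0, 1e-12, 8) / 1e6)   # ~1603 MS/s
    ```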

  12. Steve Taranovich
    September 18, 2013

    Hello @goafrit2–The IEEE has published papers on sub-180 nm data converter performance. Here are some results:

    A 4.5 mW, 8-bit sub-ranging ADC at 750 Msps in 28 nm technology: the article compares similar ADCs in 55 nm, 65 nm, and 130 nm, and the 28 nm part is shown to have far smaller die area and far lower fJ/conversion, with power (about 4 mW) comparable to the 65 nm design but far lower than the 55 nm and 130 nm designs. (See the reference 4 paper in this blog.)

    Check out the other references in this blog and you will see that going below 180 nm gives higher sampling rates at lower power with smaller die size.

  13. fasmicro
    September 18, 2013

    Sure, the lower the feature size, the better, though the optimal size has not been determined. But the down-scaling cannot continue indefinitely. It is a valid question, since we know that below a certain size, tunneling, static power, and all kinds of other issues crop up.

  14. fasmicro
    September 18, 2013

    >> 180 nm give higher sampling rates at lower power with smaller die size

    Not debatable – the transition frequency of a MOSFET scales inversely with channel length. When the transistor is smaller, you get better speed!
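
    In the long-channel square-law approximation (which ignores velocity saturation, so the short-channel numbers come out optimistic), fT goes as 1/L². A quick sketch with illustrative mobility and overdrive values:

    ```python
    import math

    def ft_hz(mu_cm2: float, v_ov: float, l_m: float) -> float:
        """First-order estimate: f_T ~ mu * Vov / (2*pi*L^2).
        Square-law model only; real nm-node devices saturate lower."""
        mu = mu_cm2 * 1e-4  # cm^2/(V*s) -> m^2/(V*s)
        return mu * v_ov / (2 * math.pi * l_m**2)

    for L in (180e-9, 90e-9, 28e-9):
        print(f"L = {L*1e9:3.0f} nm -> f_T ~ {ft_hz(300, 0.2, L)/1e9:6.0f} GHz")
    ```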

  15. Steve Taranovich
    September 18, 2013

    Hello @goafrit2—Check out Linear Technology's 20-bit LTC237X-20 series ADCs. They have 104 dB SNR and sample at rates from 250 ksps up to 1 Msps; INL is ±0.5 ppm and DNL is ±0.1 ppm.

    See also Maxim's MAX11156, an 18-bit, 500 ksps part. Its INL and DNL are defined in terms of the least significant bit (LSB). Its guaranteed maximum and minimum INLs are +8 and –8 LSB (±2.5 typical), and its guaranteed maximum and minimum DNLs are +0.9 and –0.9 LSB (±0.5 typical).

    Not too shabby

  16. fasmicro
    September 18, 2013

    @Steve, good research.

    The INL of ±0.5 ppm and DNL of ±0.1 ppm do not help that much, but the second part's +0.9 and –0.9 (±0.5 typical) means that the ADC, technically and practically, is not effectively 18 bits. The LT product is not clear with its ppm specs, while the Maxim one comes to around 16 useful bits. I cannot understand why LT is using ppm for INL and DNL when the industry standard is LSB.
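
    For what it's worth, the ppm and LSB views reconcile directly, since 1 LSB at N bits is (full scale)/2^N. A quick sketch of the conversion:

    ```python
    def ppm_to_lsb(ppm: float, bits: int) -> float:
        """Convert an error in ppm of full scale to LSBs at N bits."""
        return ppm * 2**bits / 1e6

    # LT's +/-0.5 ppm INL expressed in 20-bit LSBs:
    print(ppm_to_lsb(0.5, 20))  # ~0.52 LSB -- about half an LSB at 20 bits
    ```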

  17. Scott Elder
    September 18, 2013

    @Steve – Do you know of a specific application that needs a 20-bit SAR, and why? Just curious if this is a bragging-rights part or targeted at a specific problem not adequately served by, say, 18 bits. It's not like the part costs 5% more – it is 100% more (~$40 USD).

    thanks.

  18. goafrit2
    September 20, 2013

    >> Do you know of a specific application that needs a 20-bit SAR and why?

    Not that I have designed any – the best I have gotten is a 14-bit ADC within the ±0.5 LSB limit. But I know that you may need up to 20 bits in gyroscopes, especially if you want to use them to sense falls in medical devices. That granularity gives you margin to help the patient.
