CMOS statistical variability: The skeleton in the closet

For many years, great swathes of the semiconductor industry buried their heads in the sand, ignoring the messages coming from research establishments about the importance of CMOS statistical variability introduced by the discreteness of charge and matter.

First they completely ignored the problem; then they tried to hide it. Now that statistical variability is finally entering the public domain, it's set to hit fabless and chipless design companies like a steam hammer.

Thankfully, several events coincided in 2008 to challenge the status quo. The big CMOS and electronic device conferences, such as the VLSI Technology Symposium and IEDM, were flooded with papers solely focused on the issue of statistical variability in 45-nm and 32-nm technology devices. Statistical variability lay at the heart of special sessions focused on the interaction between technology and design. TSMC replaced the traditional 'total corners' with 'global corners' and began advising its customers to superimpose statistical Monte Carlo simulations on top of the global corners to capture the effect of statistical variability in design.
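The statistical Monte Carlo step described above can be illustrated with a minimal sketch: sample a per-transistor threshold-voltage shift from a normal distribution around a global-corner nominal value, then look at the resulting spread in a simple drive-current metric. All the numbers below (nominal Vth, sigma, supply voltage, the alpha-power exponent) are illustrative assumptions for the sketch, not foundry data or TSMC's actual methodology.

```python
import random
import statistics

# Illustrative values only -- not foundry data.
VTH_NOMINAL = 0.45   # V, nominal threshold voltage at a "global corner"
VTH_SIGMA = 0.035    # V, assumed sigma(Vth) from random discrete dopants
VDD = 1.0            # V, assumed supply voltage
ALPHA = 1.3          # assumed alpha-power-law exponent

def sample_vth(n, seed=1):
    """Draw n per-transistor threshold voltages around the corner value."""
    rng = random.Random(seed)
    return [rng.gauss(VTH_NOMINAL, VTH_SIGMA) for _ in range(n)]

def relative_drive(vth):
    """Toy alpha-power-law drive current, normalized to the nominal device."""
    return ((VDD - vth) / (VDD - VTH_NOMINAL)) ** ALPHA

vths = sample_vth(10_000)
drives = [relative_drive(v) for v in vths]
print(f"sigma(Vth)    = {statistics.stdev(vths) * 1000:.1f} mV")
print(f"weakest drive = {min(drives):.2f} x nominal")
print(f"strongest     = {max(drives):.2f} x nominal")
```

The point of the exercise is the tail: a single corner gives one nominal device, while the Monte Carlo population exposes the slow, weak-drive transistors that statistical variability produces around that corner.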

The 2008 'update' of the International Technology Roadmap for Semiconductors (ITRS) also introduced drastic changes compared with the 2007 edition, some of the most important of them motivated by the specter of statistical variability in CMOS. In the 2007 edition there was a disparity between the number in nanometers identifying the technology generation (a number now divorced from the definition of the half-pitch and reduced to a purely commercial pointer) and the physical gate length; in the 2008 update, that disparity practically disappears at the 22-nm technology generation. This is largely because statistical variability almost 'explodes' under the previous prescription of substantial over-scaling of physical device dimensions.

In addition, research into new gate-stack materials and new device architectures has traditionally been motivated mainly by the drive to improve device performance, but not any more. One of the main driving forces behind the introduction of metal-gate technology, fully depleted silicon-on-insulator and FinFET devices has been the promise of a reduction in statistical variability.

On top of statistical variability, problems relating to the statistical aspects of reliability are looming that will, in the near future, reduce the life-span of contemporary circuits from tens of years to one or two years or less. In combination with random discrete dopants, which are the dominant source of statistical variability, the statistical nature of the discrete defect charges associated with hot-electron degradation and negative bias temperature instability (NBTI) results in relatively rare but anomalously large transistor parameter changes, leading to loss of performance or circuit failure. This is already a fundamental problem in flash and SRAM memories, and it is starting to dramatically reduce the lifetime of digital chips. The irony is that some of the technology innovations that help reduce statistical variability, such as the introduction of high-k/metal-gate stacks in the 45-nm technology generation, may themselves become a reliability time bomb.

First, the high-k dielectric is of lower quality and has a higher density of fixed and trapped charges. P-channel high-k transistors are more susceptible to NBTI, which can cause statistical variability to increase with aging. The problem is exacerbated by creeping positive bias temperature instability (PBTI) in n-channel high-k transistors, which was insignificant in their silicon-dioxide gate-stack counterparts.

The realization that there is no escape from statistical variability and reliability problems forces designers to think outside the box and find innovative solutions. Such solutions have to cope not only with the fact that, at the moment of fabrication, transistors will show a broad statistical spread in their parameters, but also with the fact that aging during the chips' useful lifetime will cause that variability to increase, and time to failure to shorten, unless design countermeasures are implemented.

The urgent need to find design-level solutions to the variability and reliability problems was highlighted in the first call for proposals of the European Nanoelectronics Initiative Advisory Council (ENIAC) Joint Undertaking (JU), issued in April 2008. As a result, a project called MODERN was funded by the European Commission in 2009.

In addition, the National Microelectronics Institute (NMI), the trade association representing the semiconductor industry in the U.K. and Ireland, in collaboration with the U.K.'s nanoCMOS Consortium, is to host its second international conference on CMOS variability on May 12 and 13, 2009, at the IET, Savoy Place, London.

Aimed at chip designers, technology developers, wafer foundries and EDA tool vendors, ICCV 2009: “Living with Variability” is set to explore the impact of CMOS variability and how it can be managed at 45-nm and below. Sessions will introduce the issues, discuss the options and share techniques for meeting the challenges of CMOS variability head-on.

The issue of statistical variability in CMOS is being dragged kicking and screaming out of the closet, and for all our sakes it's not before time!

Asen Asenov is a Professor at the University of Glasgow and academic director of its process and device simulation program. He has worked on the simulation of statistical variability in nanoscale CMOS devices.

This story appeared in the April 2009 print edition of EE Times Europe.
