Most people with Internet access — even those well outside the electronics industry — have heard of Moore's Law, attributed to its namesake, Fairchild Semiconductor and Intel co-founder Gordon Moore. Few know what the man actually said, however, and fewer still the context in which he said it.
First, Moore's Law isn't a law at all. Though the industry has allegedly stayed on a curve Moore described for five times the one-decade prediction window he offered, there is nothing immutable about the prediction. Yes, it's had an impressive run; no, you wouldn't want to tattoo it on your chest. It does not rank with expressions legitimately called laws, such as those collectively known as Maxwell's Equations or those ascribed to Kirchhoff. Nor did Moore claim any ultimate dependability for his observation.
On this basis, perhaps Moore's Law was an inspired self-fulfilling prophecy — what became the semiconductor industry's tersest and most effective bit of marketing but, in the 21st century, more marketing than science: Rarely does one see a reference to Moore's Law that correctly states what Moore presented in his seminal 1965 article, “Cramming More Components Onto Integrated Circuits.”
What do they claim? Well, many sources, such as the supposedly authoritative if not particularly technical Merriam-Webster online dictionary, say that Moore's Law states that “processing power doubles about every 18 months especially relative to cost or size.” This would have been particularly clever were it true: At the time Moore wrote his article, the optimal maximum number of devices on a single die was 50 — sufficient for SSI logic and simple analog blocks such as early operational amplifiers and comparators. The first commercially available integrated-circuit processor — the Intel 4004 — wasn't seen for another six years, so the issue of “processing power” at the IC level was in no way central to Moore's discussion.
PC Magazine’s version of Moore's Law is that “the number of transistors and resistors on a chip doubles every 18 months.” Its grammatical slip notwithstanding — all chips maintain exactly the same number of transistors and resistors they had when fabricated — it misses the very same critical element that most others do when citing Moore: He wasn’t talking about what would be possible or even what would be common practice. Moore was suggesting that, at any level of IC technology development, there would be an economically optimal number of transistors to cohabit a die and that that number was likely to double roughly every year. This isn't a subtle difference: It lies at the core of what distinguishes commercially successful technologies from laboratory demonstrations and existence proofs, which is to say, something just south of a third of a trillion dollars in revenue worldwide.
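The gap between Moore's actual extrapolation and the popularized 18-month rate is easy to quantify. The sketch below is illustrative (the function name and the 64-component baseline are assumptions for the example; Moore's 1965 plot showed roughly 2^6 components at minimum cost and extrapolated, at a doubling per year, to about 65,000 components by 1975):

```python
def optimal_components(year, base_year=1965, base_count=64, months_per_doubling=12):
    """Economically optimal component count per die, assuming exponential doubling."""
    doublings = (year - base_year) * 12 / months_per_doubling
    return base_count * 2 ** doublings

# Moore's own 1965 extrapolation: a doubling every 12 months
print(round(optimal_components(1975)))  # 65536, close to Moore's famous 65,000

# The popularized 18-month rate falls far short over the same decade
print(round(optimal_components(1975, months_per_doubling=18)))  # roughly 6,500
```

The order-of-magnitude difference over a single decade shows why the doubling period matters as much as the exponential form itself.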
Playing fast and loose with Moore's prediction isn’t limited to the popular press but, alas, is also an activity in which their expert sources participate. For example, the founding director of HP Labs' Information and Quantum Systems Laboratory was quoted as saying, “Moore's Law in itself has evolved and morphed in time. It used to be the number of transistors in a chip, but now it means exponential growth in capability on a chip.”
The law has evolved and morphed? Resisting, for just a minute, the temptation to try that trick with Newton: even were we to agree to a redefinition of terms, how are we to measure this alleged “exponential growth in capability” with functional blocks that are not parametrically or functionally comparable? How are we to evaluate claims that we are or are not on Moore's curve if we've changed the axes along the way? As a good friend and mentor used to say, these are ideas that “melt in the ear, not in the mind” if we are to apply Moore's Law to engineering comparisons and not merely as a peg to hang marketing claims on.
How did we, as an industry, lose track of Moore's original insight? See part two of this article.