All the recent talk on this site about analog/digital/mixed-signal integration has me thinking. In the hands of skilled, responsible practitioners, these powerful tools can produce great results. In the wrong hands, it's like playing with fire, and the results may be tragic.
If you have a high-enough-volume application for a specific function with a known sensor, known signals, known loads, etc., then integration makes sense. Yeah, the cost of a mask set (plus revisions — and with analog content, there will be revisions) is high, so it’s a big bet, but the cost per unit will definitely be lower. Each application has its crossover point in terms of volume versus non-recurring expenses.
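That crossover point is simple to estimate on the back of an envelope. Here's a minimal sketch; the dollar figures are purely hypothetical, invented for illustration:

```python
# Back-of-the-envelope break-even estimate: integrated custom chip vs.
# discrete parts. All figures below are hypothetical, for illustration only.

def break_even_volume(nre_cost, discrete_unit_cost, integrated_unit_cost):
    """Units you must ship before the NRE (mask set plus revisions) pays off."""
    savings_per_unit = discrete_unit_cost - integrated_unit_cost
    if savings_per_unit <= 0:
        raise ValueError("integration must lower per-unit cost to break even")
    return nre_cost / savings_per_unit

# Example: a $2M mask-set budget (including a respin), saving $1.50 per unit.
volume = break_even_volume(2_000_000, 4.00, 2.50)
print(f"Break-even at roughly {volume:,.0f} units")
```

If your honest volume forecast lands comfortably above that number, the bet makes sense; if it lands anywhere near it, remember that analog respins can push the NRE well past the first estimate.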
On the other hand, if your application is not very high-volume (or you really don’t know, no matter how slick the marketing department’s PowerPoint deck is), or you’re a startup weighing whether to use the cash in the bank to make payroll this month or blow it all on a mask set, the choice is not quite so clear.
The decision on digital integration is a lot more straightforward. If there is a programmable processor in the mix, many bugs can be patched in software — that’s sometimes hard to do if there is an “analog bug” or shortfall in accuracy. If you’re thinking about integrating an analog input, get very nervous if your designer (or your own internal voice) says something like: “I do 100,000-transistor chip designs all the time. Even though I’ve never done an analog design, I think I can knock out one of these sigma-delta converters. I’ve read a few papers, and think I get it. No big deal, couple of amplifiers, maybe a total of 100 more transistors… piece of cake.”
Recipe for disaster.
I recall an incident when the first samples of a custom power-management chip got to the customer, who found that the startup sequence got scrambled under certain unforeseen circumstances. There were two choices: an all-levels change to the chip (read: months and tens of kilobucks), or… wait for it… a patch to the system software. The customer’s engineering team developed a workaround in a few hours, and software solved an analog problem.
Incidentally, I notice that most of the discussion has revolved around integrating “classical analog” on a mostly-digital chip. Nobody has raised the issue of the “fringe” analog functions like RF, power management, and sensors (e.g., MEMS). But there’s a huge push to integrate that stuff, too.
I recall visiting a customer once to introduce a line of wireless chipsets. Our product for the then-current mainstream standard was single-chip, while the next generation, which needed much higher performance in the RF and mixed-signal sections, was three chips. The customer asked why we didn’t go directly to a single chip for the new standard. I explained the advantages of partitioning the system into the technologies that best fit each part: the digital processor in the best available fine-line CMOS logic process; the mixed-signal section in the best-characterized analog CMOS process; and the RF in a BiCMOS process for the lowest possible noise.
He thought a moment, then asked how soon we could get it all onto one chip.