You undoubtedly know that 3D printing — also known as additive manufacturing (AM) or rapid prototyping (RP) — is a hot topic these days. The publicity and hype almost suggest that whatever your prototyping or manufacturing problem may be, 3D printing is the answer.
In some ways, this attention is well justified; just read this amazing story of how it was used to create an implantable, bio-resorbable temporary tracheal splint for an infant: “Baby breathes easy with laser-printed air tube.”
At the same time, we all know that sometimes the hype gets ahead of the reality. That's why I especially enjoyed an article in a recent issue of Desktop Engineering: “Point-Counterpoint: Additive vs. Subtractive Rapid Prototyping.”
In the piece, proponents of AM made the case for their approach, while proponents of more traditional manufacturing techniques, such as milling and drilling, did the same for theirs. Of course, the reality is that neither technique is the right or best one in all situations; the correct answer depends on many factors, including material to be used, unit volume, time to market, and cost, just to cite a few. As usual, it's largely about the tradeoffs and constraints.
It's somewhat the same in the IC world. Integrating “more” onto a single die has been the unstoppable trend since the first days of the IC, where the “more” can encompass a wide range of attributes: more functionality, more memory, more peripherals, more buffers, more I/O, more of whatever the end-application can use. Not only does this make a lot of sense, it makes a lot of products possible, no doubt of that. Without bigger ICs with more of everything and anything, many of the devices we take for granted, ranging from small smartphones to large computers, would be impossible to build with the features and price we want to meet.
Still, there are many times when integration isn't the answer. For example, if an application requires a precise analog front-end amplifier or signal conditioner, designers often go with a single-function, basic IC that has few or no features beyond its primary one. That's why analog vendors keep introducing new products such as op amps with ultra-low bias current for sensor inputs.
Yet sometimes, the push to use an IC with “everything” integrated onto it, and to avoid that single-function, high-performing analog component, is pretty strong. Management may insist that a more highly integrated part is the better choice, and sometimes, they are right: Perhaps the deficiencies — if any — of the integrated part can be overcome by better layout, lower-noise supply rails, some clever algorithms, or a more elaborate calibration approach, thus simplifying the BOM and lowering the cost. As in most engineering situations, the correct answer is, “It depends.”
That's why I'd like to see a one-on-one debate, or perhaps a panel, with IC designers, circuit designers, and other knowledgeable design experts doing a “point-counterpoint” on this topic, complete with real case histories of what worked when, and what didn't. I am fairly certain it would be both interesting and lively.
Have you ever been involved in this sort of technical dispute? How was it resolved? What was the outcome?