It's not news that film/chemical-based photography is dead for almost all practical purposes, done in by the many virtues of digital cameras (no need to repeat them here, you know them well).
But there's an ironic twist to the story: Just a few years after digital cameras beat film photography into the ground, the stand-alone, low-end digital camera itself is under assault and is a quickly shrinking business. A recent article in The Wall Street Journal (sorry, it may be behind their paywall) had some numbers on the decline and fall of the basic digital camera. There are two reasons for this: First, the cameras are pretty good and reliable, so there's less need to replace them; second, and more important, smartphones are ubiquitous, and their embedded cameras are apparently of sufficient quality for most people, plus there's the all-in-one convenience.
In response, the camera makers are trying to go upscale, pushing cameras with more features, including GPS, WiFi, SLR-like functions and flexibility, interchangeable lenses, and more. Sony has even come out with clever attachments, the DSC-QX10 and DSC-QX100 accessory lenses, which bring some higher-end camera and lens capabilities to the smartphone.
What we have here is the next step in the ongoing absorption of standalone products and their functions into a single central unit, as the camera, GPS, voice recorder, and even the traditional wallet fade away (or is it assimilation? Can you say Borg?). It's not a new phenomenon: The desktop PC, intended for business and office use, soon acquired peripheral plug-in boards that let it take over the role of existing standalone, dedicated data-acquisition and control units, test instrumentation, and other engineering and industrial functions.
We see the same situation on the smaller scale of ICs, as well. At first, functions closely linked to the A/D or D/A conversion core were integrated on-chip: digital I/O buffers, voltage references, signal scaling, conversion clocks, and more. It made sense, since you needed these for the conversion function.
But now we have passed well into the next phase of mixed-signal IC integration, where disparate functions are also put onto the same chip, making the component more application-specific and also becoming, to use a cliché, a “system on a chip.” For example, an ADC may also have a processor for data analysis, a high-level protocol and interface such as Ethernet for communications, system-level power management, and more.
Is this good or bad? As always, the answer is: “it depends.” For many designers, such highly integrated components are a good thing: They simplify the circuitry, reduce footprint, and dissipate less power. But they can also lull you into not seeing what compromises, if any, were made along the way, unless you look carefully at the data sheet and applications support collateral. For example, are SNR or ENOB sufficient for your application, or are they marginal with little headroom or cushion?
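To make the headroom question concrete, here's a minimal sketch of the standard relation between a converter's measured SINAD and its effective number of bits, ENOB = (SINAD - 1.76 dB)/6.02 (the specific 12-bit part and 68 dB figure below are hypothetical, purely for illustration):

```python
def enob(sinad_db: float) -> float:
    """Effective number of bits from measured SINAD (dB),
    using the standard relation ENOB = (SINAD - 1.76) / 6.02."""
    return (sinad_db - 1.76) / 6.02

# A nominally "12-bit" ADC with a measured SINAD of 68 dB
# delivers only about 11 effective bits:
print(round(enob(68.0), 1))  # 11.0
```

A full bit of resolution can quietly disappear between the headline spec and the measured figure, which is exactly the kind of margin worth checking against your application's needs.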
There's a parallel in the digital camera situation. A good image capture depends not just on the physical resolution (pixel count) of the sensor array, but also on the light-capture performance of the pixel elements themselves. Even more critical, a bigger and better lens is a major factor in actual image quality, and that small smartphone lens is far less able to capture light, and to do so without chromatic aberration and distortion, than even a basic digital-camera lens; mid- and high-range digital cameras have even better lenses, of course.
As for me, I'm sticking with my low-end (<$100) digital camera, for several reasons: I'm "old fashioned" (seems weird to say that about digital cameras) and like the form factor; I occasionally want to use a tripod, and even this basic camera has the tripod screw mount; and most important: zoom.
That's where it's easy to fool yourself with a smartphone camera. A digital zoom literally throws away resolution: an N× zoom keeps only the central 1/N × 1/N portion of the frame, so a 2X zoom cuts actual pixel resolution to one quarter of its initial value, and a 3X zoom cuts it to one ninth. In contrast, an optical zoom doesn't degrade resolution at all.
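The 1/N² arithmetic can be sketched in a few lines (the 8-megapixel sensor below is an assumed figure, just for illustration):

```python
def effective_megapixels(native_mp: float, digital_zoom: float) -> float:
    """Pixel count left after an N-times digital zoom.

    Digital zoom crops to the central 1/N x 1/N of the sensor,
    so the usable pixel count falls by a factor of N**2.
    """
    return native_mp / digital_zoom ** 2

# An assumed 8 MP sensor:
print(effective_megapixels(8.0, 2))  # 2.0 MP left at 2X
print(effective_megapixels(8.0, 3))  # ~0.89 MP left at 3X
```

In other words, a modest 3X digital zoom on an 8 MP phone leaves you with less than 1 MP of real picture information.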
As always, the engineering decision on how much integration makes sense is not an easy one. But the conscientious engineer looks past the "all-in-one" headline to see what compromises, if any, had to be made, and whether they are acceptable.
What's been your experience with tradeoffs of integrated solutions?