What you need to know about imaging solutions for camera phones

The ultimate digital convergence platform: the camera phone
The volume of digital photos people are snapping today is dramatically reshaping the entire digital imaging ecosystem. Social networking websites such as MySpace and YouTube have driven enormous increases in image uploading, while the availability of photo- and video-sharing sites like Flickr and iMovies has spurred similar amounts of image downloading. And today, uploading and downloading of photos and video occurs not just to and from PCs, but among a wide range of devices–PCs, laptops, PDAs and camera phones.

The camera phone is the most disruptive of these devices. Analyst firm Gartner notes that camera phones accounted for 48 percent of total worldwide mobile phone sales in 2006 and will jump to 81 percent by 2010.1 Unit sales of camera phones will exceed one billion by 2010.2

Today, camera phone sales are surging as digital camera sales are stalling. Among the reasons: digital camera sales are reaching a saturation point with current technology. Multiple studies have shown that the average U.S. family of four already owns two or more digital cameras. On the other hand, improved camera phone technology, analyst firm Strategy Analytics notes, will result in camera phones capturing 15 percent of the low-end digital still camera market by 2010.3

Poor picture quality, limited camera phone memory and lack of an “ecosystem” to move pictures off of the phone and onto PCs or other storage devices might have made camera phones little more than a toy or a fad. However, a new wave of emerging technologies has encouraged early adopters to use their camera phones and has attracted a new set of users who would not have purchased a camera phone when their features were limited. As a result, just as digital still cameras (DSCs) dramatically shrank market demand for film cameras, new and improved camera phones are impacting sales of DSCs.

A recent survey by German optics firm Schneider Kreuznach polled 1,000 users in the U.S., Germany, China and India regarding their usage patterns. Highlights of their findings include:

  • One out of four respondents indicated that in the future they would exclusively use their camera phones for picture-taking (early adopters), provided the quality matched that of today's upper mid-range digital cameras with approximately 6 million pixels.
  • Under certain circumstances, 43 percent would be willing to replace their digital camera with a suitable camera phone. At present, only 32 percent would still prefer a digital camera.
  • Users in India and China were particularly open-minded towards camera phone photography. In these countries, nearly eight out of ten of those questioned (79 percent) could imagine using only camera phones for picture-taking in the future.
  • While in India and China more than half of all respondents (60 and 52 percent respectively) already take pictures with their cell phones several times per week and in the USA more than a quarter (26 percent), Germany has the lowest number of so-called 'power users' (12 percent) and at the same time the highest number of non-users (59 percent).4

As the digital convergence trend continues, it becomes increasingly important for design engineers developing camera phones to understand what matters most when selecting an image sensor, and to stay informed of the latest image sensor technology developments that will enable camera phones to deliver higher-quality images to consumers.

Choosing the Right Image Sensor
There are many criteria that can be used to choose an image sensor. Some of them are qualitative/subjective and some are quantitative/objective. When shopping for a CMOS image sensor for a camera phone, there are a few important “must know” quantitative metrics to keep in mind. They are:

  • Pixel density–pixel density is a fundamental part of an image sensor's performance; the more pixels the sensor has, the more detailed the picture can be.
    But when selecting an image sensor, designers should not look at pixel count alone. As pixels decrease in size, they also decrease in performance. In order to add more pixels to sensors without compromising image quality, sensor vendors are working with a variety of new technologies to enhance pixel performance, and designers should be aware of what pixel technologies their vendors are using to boost performance as they shrink pixel size.
  • Sensitivity–sensitivity measures the response of the sensor to light stimulus. It is often expressed in mV/lux-sec.
  • Signal-to-noise ratio (SNR)–SNR is the logarithm of the ratio of a signal level to the standard deviation of the signal level. It measures the noise performance of the sensor. Noise as referred to here is a combination of readout noise, shot noise, dark current noise, fixed-pattern noise, temporal noise and others. The SNR is dependent on test conditions such as frame rate/integration time, luminance conditions and test target.
  • Dynamic range–dynamic range measures the ability of an image sensor to adequately capture both bright and dark objects in the same image, and is often defined as the logarithm of the ratio of the highest signal level to the lowest signal level (the noise floor), with 54 dB being the common specification for commercial image sensors. Image sensors with a wide dynamic range typically provide better performance in bright-light environments; pictures taken in bright light with a sensor that has poor dynamic range appear “washed out” or blurry.
  • Color representation–color reproduction is often quantified in terms of CIE Lab color space. The color space is a mathematical method to map human color vision in a three dimensional space where each color has a coordinate assigned to it. The color representation accuracy is determined by measuring the difference between the coordinates generated by the sensor and those generated by the human eye for the same color. The key metric used to measure color reproduction accuracy is “Delta E.” Delta E is the spatial distance in the color space. A Delta E of below 10 would be a typical specification for most camera phone image sensors.
  • White balance–white balance refers to a sensor's ability to accurately reproduce colors in changing light environments. Most camera systems have an auto white balance function, which can automatically change the white balance as lighting conditions change. Design engineers should look for image sensors equipped with a good auto white balance (AWB) control that provides accurate color reproduction.
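To make the quantitative metrics above concrete, here is a minimal sketch of how SNR, dynamic range and Delta E are typically computed from their definitions. The numeric values are purely illustrative assumptions, not figures from any real sensor datasheet:

```python
import math

def snr_db(signal_level, noise_std):
    """SNR in dB: 20*log10 of signal level over the noise standard deviation."""
    return 20 * math.log10(signal_level / noise_std)

def dynamic_range_db(highest_signal, noise_floor):
    """Dynamic range in dB: ratio of highest signal to noise-floor signal."""
    return 20 * math.log10(highest_signal / noise_floor)

def delta_e(lab1, lab2):
    """Delta E: Euclidean distance between two CIE Lab coordinates."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Hypothetical sensor figures, chosen only to illustrate the formulas
print(round(dynamic_range_db(2000, 4), 1))            # prints 54.0 (the common spec noted above)
print(round(delta_e((50, 10, 10), (52, 14, 7)), 2))   # prints 5.39 (below the Delta E < 10 spec)
```

A signal-to-noise ratio of 500:1, for example, works out to roughly 54 dB, which is why that figure recurs as a commodity-sensor dynamic range specification.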

Image sensors are usually available in one of two formats: a raw data sensor or an SoC (system-on-chip) sensor. Each format offers certain advantages and disadvantages.


Figure 1. Elements of the image-processing pipeline

Raw image sensors require a peripheral image-processing engine to handle elements of the image-processing pipeline not provided by the sensor (demosaicing, color space conversion, gamma conversion, white balance adjustment, output formatting, etc.). While the use of a peripheral image processor can significantly increase the overall camera system's footprint, this configuration has traditionally offered the highest-quality images. However, many of the cellular baseband chips available today integrate an image processor on-chip, giving camera phone designers using raw image sensors a processing solution that doesn't require another peripheral chip.
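As a toy illustration of two of the pipeline stages named above (white-balance gain and gamma conversion), the sketch below applies them to a single normalized RGB pixel. The gain and gamma values are illustrative assumptions, not parameters of any actual image processor:

```python
def apply_white_balance(rgb, gains):
    """Scale each channel by its white-balance gain, clamping to [0, 1]."""
    return tuple(min(1.0, c * g) for c, g in zip(rgb, gains))

def apply_gamma(rgb, gamma=2.2):
    """Power-law gamma conversion on normalized [0, 1] channel values."""
    return tuple(c ** (1.0 / gamma) for c in rgb)

# A raw pixel that reads too blue under warm light (made-up numbers)
raw = (0.20, 0.30, 0.45)
balanced = apply_white_balance(raw, gains=(1.8, 1.0, 0.8))  # boost red, cut blue
out = apply_gamma(balanced)  # brighten midtones for display
```

A real pipeline would run these stages after demosaicing and alongside color space conversion, but the structure is the same: a chain of per-pixel transforms, implemented either on a peripheral processor (raw sensor) or on-chip (SoC sensor).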

The SoC sensor integrates many of the image processing elements mentioned above into the same chip as the sensor. Integrating these image pipeline elements onto a single chip gives designers an easy-to-use, low cost imaging solution with a small design footprint; a significant benefit for designers looking to engineer a camera phone with the small, sleek form factors today's consumers demand. However, due to their limited processing power, integrated SoC sensors provide pictures with limited quality in comparison to those generated by a raw sensor paired with a peripheral image processor. This may change in the future, as the semiconductor process technologies used to manufacture image sensors move to smaller, more efficient process geometries, like 90- and 65-nanometer. With these smaller processes, SoC image sensors will be able to provide higher performance with even more image processing power and elements integrated on-chip. In fact, many of the features consumers enjoy in their DSCs, such as image stabilization, JPEG compression and subject recognition, are being integrated into SoC image sensors available to camera phone designers now or in the near future.

Beyond the Image Sensor
The image sensor has to function as part of a camera module system consisting of a lens, infrared filter and lens barrel. As such, overall image quality is not the sole responsibility of the sensor. Camera phone designers need to be familiar with a variety of performance metrics determined by other components in a camera system, including:

  • Sharpness–sharpness is defined by the boundaries between zones of different tones or colors. One way to measure sharpness is to use the rise distance of the edges of subjects in the picture. Camera modules measure sharpness in terms of modulation transfer function (MTF) in the spatial domain or spatial frequency response (SFR) in the frequency domain. Most sensors will use a filter to enhance the perceived sharpness, but this can often introduce more noise into the image.
  • Resolution–resolution is the ability of the camera module system to distinguish finely spaced detail. In addition to the image sensor's pixel pitch, which determines the spatial sampling rate, the module's lens system plays a critical role here. A lens with a higher MTF often provides higher resolution, but the lens has to be matched to the sensor's spatial sampling limit (its Nyquist frequency) to avoid aliasing (artifacts introduced when the lens passes spatial frequencies finer than the sensor can sample).
  • Z-height–z-height refers to the thickness of the camera module. As camera phone designs become thinner and thinner, z-height becomes an important factor in camera module selection. Z-height is dependent on the camera's optical format (pixel density) and the chief ray angle (CRA) of the sensor. Using a larger CRA can reduce the total height of the module, but it can cause crosstalk between pixels and signal fall-off, a side effect referred to in the industry as “lens shading.”
  • Focus–the majority of cell phone camera modules have a fixed focus with a focal distance range of 40-60cm. However, some high-end models use miniaturized auto focus modules that can focus clearly from 10cm to a few meters away. Unlike a DSC, though, the auto focus module in a cell phone has to pass a rigorous durability test, or “drop test.” To meet the demands of the drop test and to lower cost, sensor vendors have developed an “auto focus module” with no moving parts. The technology most commonly used in these modules is called extended depth of field (EDoF). It uses digital image processing to reconstruct the image at different focal planes, achieving a focus ability similar to that of auto focus modules with moving parts.
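The Nyquist limit mentioned in the resolution bullet is a simple back-of-envelope calculation: one line pair requires at least two pixels, so the limit in line pairs per millimeter follows directly from the pixel pitch. The 1.75-micron pitch below is an illustrative value, not a specification from any particular sensor:

```python
def nyquist_lp_per_mm(pixel_pitch_um):
    """Sensor Nyquist limit in line pairs/mm: one line pair spans two pixels."""
    pitch_mm = pixel_pitch_um / 1000.0
    return 1.0 / (2.0 * pitch_mm)

# A hypothetical 1.75-micron camera-phone pixel
limit = nyquist_lp_per_mm(1.75)
print(f"Nyquist limit: {limit:.0f} lp/mm")  # prints "Nyquist limit: 286 lp/mm"
```

Lens detail beyond this spatial frequency cannot be resolved by the sensor and folds back into the image as aliasing, which is why the lens MTF should be matched to roll off near this limit.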

Within the image sensor industry, more and more image sensor vendors are devoting design, engineering and manufacturing resources to creating camera modules on the silicon fabrication line to simplify the camera module supply chain. Because of their familiarity with silicon processes, these image sensor vendors can create camera modules that module makers cannot. For example, by October 2007, chip scale camera modules, based on innovations in module manufacturing technology by Toshiba, had a size footprint up to 64 percent smaller than competing modules in the same performance class.5 By owning their own module production process, Toshiba is also simplifying the supply chain and shortening lead times, much to the benefit of camera phone designers and manufacturers.


Figure 2. A comparison of the structures of a conventional camera module and a chip scale camera module

Lyra Research notes that by late 2008 or early 2009, “The cumulative number of camera phones shipped will surpass the cumulative number of both conventional and digital cameras shipped in the entire history of photography–and camera phones will have been on the market for less than a decade.”6 As the march of new technologies continues, even today's camera phones will give way to “intelligent” phones that utilize their camera subsystems in ways beyond simple still image or video capture. Features such as image recognition–where the camera will “recognize” a person or scene being photographed and group like photos together or, in conjunction with GPS functionality, use a picture of a distinctive landmark to provide consumers with directions–will bring new applications to camera phones that we can scarcely imagine today. The potential for new applications for camera phones is limited only by designers' imaginations.

References
1 iT Wire, “Every Cell Phone a Camera-Phone Soon, Says Gartner,” November 6, 2006.
2 iT Wire, “Every Cell Phone a Camera-Phone Soon, Says Gartner,” November 6, 2006.
3 Digital Camera Info.com, “Camera Phones Outsell Digital Cameras,” by Emily Raymond.
4 Let's Go Digital.com, “Cell Phones Replace Digital Cameras,” by Ralf Jurrien, February 11, 2007.
5 Press release, “Toshiba Launches New Line of Ultra-Compact Camera Modules Featuring Dynastron Image Sensor Technology,” October 1, 2007.
6 Lyra Research, “Pictures at Hand: 2006 Worldwide Camera Phone Market Report,” Introduction, October 4, 2006.

About the Author
John Lin is a senior design engineering manager for the System LSI Group of Toshiba America Electronic Components, Inc. (TAEC). John has 18 years of experience across product development, project management, and technical marketing in Japan and the U.S. He is now responsible for field applications for the CMOS image sensor business of the Imaging and Communication Marketing BU of Toshiba America Electronic Components, Inc. He holds BE and MS degrees from Tsinghua University in China and an MBA from the Marshall School of Business at the University of Southern California.

Shri Sundaram is manager of Business Development for the System LSI Group of Toshiba America Electronic Components, Inc. (TAEC). He is responsible for product marketing of the Imaging and Communication ICs for the North American region. He has over thirteen years of experience in various engineering and marketing roles in the semiconductor, telecom, and IT industries. Shri holds a Master's degree in Business Administration from Thunderbird School of Global Management, Glendale, Arizona, and a BSEE degree from Birla Institute of Technology and Science, Pilani, India.
