Commenting on Steve Taranovich’s blog on CES, I said, more or less, that a lot of the connected technology seemed to have been conceived in the scrambled brains of marketing people. I feel somewhat justified in taking such a pot shot, having had the word "marketing" in my title several times in my career. Nonetheless, when I thought about what useful applications might look like, it occurred to me that sensors and intelligent processing could really benefit people with disabilities.
In particular, I started thinking about the visually impaired and how they might benefit from advances in the integration of sensing and communications technologies. Here I'm going to expand on a couple of lines of thought as a starting point, in the hope of getting feedback from the deep knowledge of the Planet Analog community.
Let's start with a basic challenge that might present itself to a visually impaired individual. Most of us have the freedom to go into our kitchens, stand there with the refrigerator open, and see what we want to eat. We also automatically check the quality of leftovers and anything that has been in there for a while. I'll bet many of you have done the look test (mold, discoloration, and other visual do-not-eat signs) and the sniff test (does it smell bad?). Now imagine someone with almost no visual perception: how is he or she to decide whether something is OK? Analog integration to the rescue! Figure 1 shows a simple flowchart for a lunch-decision process. Now let's consider how to do this for our hungry, sightless person.
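Since the flowchart boils down to a couple of yes/no checks, here's a minimal sketch in Python of what that decision tree might look like in code. The item fields, the 0.7 threshold, and the idea of collapsing the sensor output to a single freshness score are all assumptions of mine, standing in for whatever the real hardware would report:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class FoodItem:
    name: str
    expiration: date      # as read by the label-reading optical sensor
    quality_score: float  # 0..1, as estimated by the hyperspectral analysis

SPOILAGE_THRESHOLD = 0.7  # assumed cutoff; a real system would calibrate this

def decide(item: FoodItem, today: date) -> str:
    """Walk the Figure 1 decision tree for a single refrigerator item."""
    if item.expiration < today:
        return f"{item.name}: discard (past expiration)"
    if item.quality_score < SPOILAGE_THRESHOLD:
        return f"{item.name}: discard (spoilage detected)"
    return f"{item.name}: OK to eat"

# Example: one leftover, scored by the (hypothetical) sensor pipeline
print(decide(FoodItem("leftover pasta", date(2014, 2, 1), 0.9), date(2014, 1, 20)))
```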
To accomplish this, we need at least one sensor, a front-end DSP or microprocessor, and some way to alert the user of the outcome. Considering we have Google Glass, wearable sensors, and the Internet of Everything, I'm proposing a highly integrated solution as the best way to turn this into a product. Because of the variety of food and packaging types that might be found in a refrigerator, I want to integrate a hyperspectral sensor to look at the food, plus a secondary optical sensor to read labeling. I'm assuming our user's sense of smell is working fine, so we won't try to integrate that.
Hyperspectral imaging goes beyond capturing color images for analysis (which would fall under normal image processing): it collects spectral data at each pixel, enabling detection of a wide variety of things that are invisible or poorly discerned by the naked eye. For example, this lecture from the University of Haifa on hyperspectral imaging applications shows that you can inspect apples, fish, and citrus fruits for damage and freshness. The spectral wavelengths can extend outside the normal visible range, from UV at 200 nm well into the infrared at 2,500 nm. As another, more detailed example, researchers at the Catholic University published a paper on hyperspectral imaging of apples to determine quality, which goes into great detail about the methods.
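To give a feel for what such analysis works with: a hyperspectral sensor produces a data cube with two spatial axes and one spectral axis, and a common first step in this kind of inspection work is a normalized ratio between two bands (the same pattern behind indices like NDVI). Here's a small NumPy sketch; the wavelengths and band count are illustrative, not a validated freshness detector:

```python
import numpy as np

# A hyperspectral "cube": two spatial dimensions plus one spectral dimension.
# Here 100x100 pixels with 224 bands spanning roughly 400-2,500 nm.
bands = np.linspace(400, 2500, 224)   # band-center wavelengths, nm (illustrative)
cube = np.random.rand(100, 100, 224)  # stand-in for real sensor data

def band_index(wavelength_nm: float) -> int:
    """Index of the band whose center is closest to the requested wavelength."""
    return int(np.argmin(np.abs(bands - wavelength_nm)))

# A simple normalized band ratio, computed per pixel.
b1 = cube[:, :, band_index(780.0)]        # near-infrared reflectance
b2 = cube[:, :, band_index(670.0)]        # visible red reflectance
index_map = (b1 - b2) / (b1 + b2 + 1e-9)  # per-pixel index image

print("mean index:", index_map.mean())
```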
One issue is that current state-of-the-art hyperspectral imagers are fairly large systems. Figure 2 shows a bench system from Gooch & Housego of the sort that might be used in research.
(Source: Gooch & Housego)
A bit more portable is Headwall Photonics' micro sensor, shown in Figure 3. This device is targeted at UAVs and similar applications, and weighs less than two pounds.
(Source: Headwall Photonics)
Of course, the underlying sensor is really just a CMOS device. Various approaches can be used to split up the incoming light for hyperspectral analysis. Figure 4 shows one approach, offered by Teledyne Dalsa, which filters the light onto several distinct sections of a single-chip sensor. Another approach uses a prism to split the light into a spectrum that then illuminates multiple sensors.
(Source: Adapted from Teledyne Dalsa's website)
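To make the filter-on-chip idea concrete, here's a toy Python sketch of how one raw frame from such a sensor might be carved into per-band images. The 2x2 layout and the four band choices are my own illustration, not Teledyne Dalsa's actual arrangement:

```python
import numpy as np

# One raw frame from a hypothetical tiled-filter sensor: the chip is split
# into a 2x2 grid of sections, each sitting behind a different spectral filter.
SECTIONS = [("blue", 450), ("green", 550), ("red", 650), ("nir", 850)]  # nm
raw = np.random.rand(480, 640)  # stand-in for a real single-chip readout

def split_sections(frame: np.ndarray) -> dict:
    """Carve the single-chip frame into one sub-image per spectral band."""
    h, w = frame.shape
    tiles = [frame[:h//2, :w//2], frame[:h//2, w//2:],
             frame[h//2:, :w//2], frame[h//2:, w//2:]]
    return {name: tile for (name, _), tile in zip(SECTIONS, tiles)}

planes = split_sections(raw)
print({name: plane.shape for name, plane in planes.items()})
```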
As daunting as this may seem, it amounts to integrating two CMOS image sensors into a small package. To avoid carrying around a computer and software, let's integrate an LTE modem and, optionally, a Bluetooth Low Energy link (if we want to, say, use an LTE phone as the data link and possibly do some processing there). We'll put all of this into a pair of glasses. Then we can run all the analysis in the cloud. In block diagram form, this doesn't appear so bad.
Most of the blocks in Figure 5 are available in highly integrated form. A first approach might be to combine the hyperspectral sensor, a camera module (like this one from ST Micro), a dedicated signal-processing chip (to do preliminary analysis of the sensor data and reduce the communication load), possibly another processor (such as this one, to do some character recognition locally), and the modem (how about Qualcomm’s Gobi?), plus power, into a system-in-package (SiP) design.
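To show how those blocks would cooperate, here's a rough sketch of the data path: the on-glasses signal processing collapses each raw cube to a single spectrum before the radio link carries anything, and the cloud sends back a short verdict for text-to-speech. Every function body is a placeholder of mine; the point is where the work happens, not how:

```python
import numpy as np

def local_preprocess(cube: np.ndarray) -> np.ndarray:
    """On-glasses DSP step: average over the spatial axes, shrinking a
    (rows, cols, bands) cube to one (bands,) spectrum per frame."""
    return cube.mean(axis=(0, 1))

def cloud_analyze(spectrum: np.ndarray, label_text: str) -> str:
    """Stand-in for the LTE uplink plus the remote classifier. A real
    service would run a trained model on the spectrum and parse the
    OCR'd label text for an expiration date."""
    return "OK to eat" if spectrum.mean() > 0.5 else "Discard"

cube = np.random.rand(100, 100, 224)  # one raw frame from the sensor
verdict = cloud_analyze(local_preprocess(cube), "best before 2014-03-01")
print(verdict)                        # hand this string to text-to-speech
```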
Of course, I'm oversimplifying. But most of the technology exists and, with some integration, could provide a platform to help the hundreds of thousands of visually impaired people in our society (US figures from the National Federation of the Blind). What do you think? How would you approach the design?