The Captain’s Log

A while back (Letting Go of the Old Ways) I took aim at the engineering practice of doing things the way they’ve always been done, rather than opening up to new solution approaches. This should not be taken to mean that we should summarily jettison older techniques in our enthusiasm to take advantage of newer devices or technologies. One such old technique helped me solve a challenge that would otherwise have cost the prospective customer much more money.

I was asked to look at how one of our products could implement an exposure measurement subsystem that could be incorporated into a hand-held exposure meter and even embedded in the light control loop of a photographic flash gun. The spec was tight: Measure exposure over a 16 EV range, to an accuracy of 0.1 EV. For the non-photographers out there, exposure value is a logarithmic scale for the light intensity in a scene, and an increment of 1 EV means a doubling of the light intensity.

The light is captured by a photodiode, and I just needed to worry about converting the wide range of anticipated photocurrents. 16 EVs is a 65536:1 range of current, and a 0.1 EV change amounts to 2^0.1, or around 1.07x. This means that the ratio of the largest current to the smallest current change we must resolve is just under 1 million (it’s 65536/0.072). So, if we were to take the naïve approach and just digitize the photocurrent, we’d need a 20-bit converter. “No problem!” said a colleague, “We have a fabulous 20-bit A-to-D converter right on our chip!” And indeed we do.
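The arithmetic is worth making concrete. Here’s a quick back-of-envelope check in Python (not part of the original design, just the numbers):

```python
import math

# 16 EV range: each EV doubles the light intensity.
span = 2 ** 16                # 65536:1 current range
step = 2 ** 0.1               # a 0.1 EV change is about a 1.072x ratio
smallest_change = step - 1    # ~7.2% of the lowest current

# Ratio of the largest current to the smallest change we must resolve:
dynamic_range = span / smallest_change
bits = math.ceil(math.log2(dynamic_range))

print(f"dynamic range ~ {dynamic_range:,.0f}:1, needs {bits} bits")
```

The dynamic range comes out just over 900,000:1, which lands you at a 20-bit converter.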

There’s just one itty bitty problem, and the clue is in the phrase “flash gun.” The xenon tubes that are still the most popular light source in such guns light up quite fast — really, we need to be able to grab hold of the light level every microsecond or two, so we can get a good picture of the total light energy emitted. And our fabulous 20-bit converter needs about 5 milliseconds to give us its result. For very short exposure durations, or for regulating the output of the flash over longer ones, it’s no use.

“No problem!” said another colleague, “We have a fabulous 1 megasample per second A-to-D converter right on our chip!” And, again, indeed we do. But this one’s only a 12-bit converter. And that’s way, way too limited a resolution to capture such a wide-ranging signal level.

Now, out there in the world of specialist analog chips, you can get 20 bits of ADC resolution at 1 Msps. The trouble is, it costs around $30, which was three EVs above my total solution budget (work it out!) right off the bat.

“You’ll think of something” was the best my colleagues could offer at this point, so I set out to do exactly that. And the past stepped up to help me, in the form of logarithmic compression — the log amp, in other words. An old-school circuit, described in the classic Burr-Brown textbooks.

Now, the log amp is still alive and well in those specialist analog circles. The devices designed by my friend and hero Barrie Gilbert count among the highest dynamic range circuits it’s possible to buy, and are easily fast enough over the current range I needed to cover. But, again, too expensive for this low-cost application. There was only one thing for it: Just Add a Transistor!

This is a blog, so here’s the bit where I skip to the end. Figure 1 shows the simulation schematic used for exploring the logarithmic relationship of a BJT’s base-emitter voltage against collector current. You’ll notice that diodes have crept in — these are necessary to prevent the transistor from going into saturation if the input current drops off abruptly, leaving the previous base current nowhere to go.

Figure 1

Simulation framework for probing the core current:voltage relationship.

Figure 2 shows a closeup of the worst-case switching behavior, with the input current suddenly falling from 8.192 mA to one-half, one quarter, and so on, right down to 125 nA. For each halving of the current, the amplifier’s output voltage becomes less negative by around 18 mV, as anticipated.
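That roughly 18 mV step per halving is just the thermal voltage kT/q times ln 2, straight out of the diode equation. A quick check, assuming a junction temperature of around 300 K:

```python
import math

# Vbe = (kT/q)*ln(Ic/Is), so halving Ic changes Vbe by (kT/q)*ln(2).
k = 1.380649e-23     # Boltzmann constant, J/K
q = 1.602176634e-19  # elementary charge, C
T = 300.0            # roughly room temperature, K

Vt = k * T / q                       # thermal voltage, ~25.9 mV at 300 K
step_per_halving = Vt * math.log(2)  # ~17.9 mV per halving of current

print(f"{step_per_halving * 1e3:.1f} mV per halving")
```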

Figure 3 shows how we can actually make some use of this block. We build it twice; one instance carries the photodiode current, and the other carries a reference current. The four Schottky diodes can be had in a single SOT-563 package from Central Semiconductor, and the dual transistor we used was the minuscule ONsemi NST3904. The difference in the voltages at the two emitters, which we can measure nicely with our fast 12-bit converter, is the “thermal voltage” — kT/q times the (natural) log of the current ratio.
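A sketch of how that voltage difference maps back onto an EV scale, given a known junction temperature. The helper name here is hypothetical, not code from the actual product:

```python
import math

k_over_q = 1.380649e-23 / 1.602176634e-19  # Boltzmann constant / charge

def ev_from_delta_v(delta_v, temp_kelvin=300.0):
    """Hypothetical helper: EVs of the photocurrent relative to the
    reference current, from the emitter-voltage difference.
    delta_v = (kT/q)*ln(I_photo / I_ref), so dividing by (kT/q)*ln(2)
    converts the natural log into a base-2 (i.e. EV) scale."""
    vt = k_over_q * temp_kelvin
    return delta_v / (vt * math.log(2))

# A photocurrent 256x (i.e. 8 EV) above the reference at 300 K:
vt = k_over_q * 300.0
print(ev_from_delta_v(vt * math.log(256)))  # ~ 8.0
```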

We need to take temperature into account, as always when you’re playing around with a PN junction. It’s easy to dynamically reroute things to use the second instance as a temperature sensor and correct the temperature dependence. Combine that with some scaling, a bit of linearity correction, and some bit manipulation to do fast exponentiation in the digital domain, and the proof-of-concept was done! And all we had to do was just add a transistor. OK, two, and some diodes. Still, neat, eh?
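The product’s actual exponentiation routine isn’t shown here, but a common fixed-point-friendly approach to recovering the linear light level from a base-2 log is to split the exponent into integer and fractional parts: the integer part is just a shift, and the fraction needs only a small polynomial or lookup table. An illustrative sketch (the quadratic coefficients are a rough fit, not from the original design):

```python
import math

def fast_exp2(x):
    """Approximate 2**x by splitting x into integer and fractional
    parts: 2**i is a shift in fixed-point hardware, and 2**f on
    [0, 1) is approximated by a small quadratic."""
    i = math.floor(x)
    f = x - i
    # Quadratic approximation of 2**f on [0, 1); endpoints are exact,
    # worst-case error in between is a few tenths of a percent, well
    # inside a 0.1 EV (~7%) accuracy budget.
    approx = 1.0 + f * (0.6565 + f * 0.3435)
    return approx * (2.0 ** i)
```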

Figure 2

Each halving of input current causes Figure 1’s output voltage to rise by 18 mV.

Figure 3

Measure the difference between the two emitter voltages.

9 comments on “The Captain’s Log”

  1. antedeluvian
    May 9, 2014


    The idea of using external hardware to introduce logarithmic processing combined with the internal op-amps of the PSoC is going to go straight into my idea toolbox. I already have an idea of where it can help.

    Not only does it increase dynamic range, but it may allow some mathematical pre-processing that could free up processing time. I need to reflect on this some more.

  2. SunitaT
    May 10, 2014

    Thanks Kendall for such a nice article. It is very true that old techniques are still helpful and should not be totally abandoned. Things change quickly in the tech world, but it is good for engineers to look back sometimes and find ways to make old techniques and tools useful.

  3. etnapowers
    May 14, 2014

    “Not only does it increase dynamic range but may allow some mathematical pre-processing that could free up processing time.”


    Nice post, what's the kind of application that you're considering? Could you tell us more about your idea?

  4. antedeluvian
    May 14, 2014


    what's the kind of application that you're considering? Could you tell us more about your idea?

    It's a little undefined at the moment, but if you go to my blog on Curve Fitting with Excel, in it I refer to a design idea by Robert Villanucci where he realizes the linearization using a multiplier chip. This chip is quite expensive, and I thought that instead of multiplying, you could take the log of the signal and add, which is a fairly normal linear thing to do, I think. However, if you now implement it using the logarithmic techniques described in this blog on op-amps within the PSoC (and save on processing), well, maybe there is room for some creativity there.



  5. eafpres
    May 14, 2014

    Hi Kendall–I'm curious if the end goal was to send exposure info to the camera in real time to adjust the shutter speed or some other parameter, or if it was simply to provide the exposure data so that a professional photographer could jot it down (as they do) and reduce the number of widgets she/he has to manage by one (i.e. a combined flash + exposure meter vs. two separate tools)?

  6. kendallcp
    May 14, 2014

    The original goal was to replace an ageing ASIC in a range of flash units.  This kind of work has to be done at arm's length because the camera makers and ODMs are quite secretive about processes and protocols.  So there's a lot of 'imagineering' needed to produce something that might work, especially when you've already decided what family of programmable devices you're going to build it on.

    Afterwards I realized that this might be just as useful as a general purpose flash meter – something that's readily available, of course, but which could perhaps be implemented more economically and/or elegantly with a highly-integrated solution based round one of our chips.

    Combining a log amp with a photodetector has some other interesting applications – fluorescence detection being a growing area.

  7. eafpres
    May 14, 2014

    Thanks, Kendall.  Interesting point on fluorescence.  I would guess that means bio-chemical applications including disease study and testing of drugs.  Nowadays those bio-folks are good at splicing in genes to make all kinds of biomolecules fluoresce.

    Have you also looked at hyperspectral imaging?  That seems to be another very hot area, everything from remote monitoring of crops to checking processed meat for pathogens before shipping.  I would expect hyperspectral applications to benefit from similar techniques to increase dynamic range.

  8. RedDerek
    May 16, 2014

    @SunitaT0 and @antedeluvian

    Designers tend to get caught up with “what is in the chip is what one has to work with”. As Kendall shows, a true designer uses the chip and then adds functionality to meet requirements. Nice work Kendall.

  9. etnapowers
    June 11, 2014
    @antedeluvian, That's a very interesting idea:
    “The circuit in Figure 1 —which includes optional circuitry—exhibits a worst-case measurement error at 250°C and 2.504V of 0.16%, or 0.4°C”
    If your requirements are low temperature measurements the precision might be really good and this solution seems really easy to implement.
    Please keep us informed, maybe you could write a blog on this, couldn't you?
