Despite their training, typical engineers don't seem to like too much math. Using established formulas is one thing; deriving them to solve a problem in a relatively unfamiliar field is another.
But with the right approach, you can learn to see shortcuts in these situations, without resorting to the Internet, which today hands us instant (but not necessarily correct) answers. It's a lesson I learned from the late Mike Riezenman, the most talented of the trade-journal engineer/editors I've personally known. He gave me what I now call the Gift of MacGyver, named after the character from the TV series that showed us how to do the most with the least.
Here's how to apply it: Visualize the problem, and then find the most direct and appropriate math possible to solve it. There's irony here, because that doesn't necessarily mean proceeding from fundamental principles to reduce a problem to its bare essentials. In many cases, it means starting from the simpler expressions that derive from those basic laws.
If you're good at imagining what's happening in a circuit from an electron's or electromagnetic wave's point of view, or if you are very observant, you're well on your way. Your intuition may not always be spot on, but you can often come upon a good working equation. Sometimes you may run counter to the experts, who deem your hypothesis too simplistic. I'd counter that notion by saying what you're really doing is looking at things in a different way.
My first stab at this almost 40 years ago involved a classic electromagnetics and antennas problem, probably the toughest analog area for the typical electrical engineer. I simply wanted to know the maximum signal voltage received at the terminals of a (resonant) half-wave dipole in free space from an RF transmitter some distance away. But I wasn't really immersed in (and didn't want anything to do with) Poynting vectors and the various electromagnetic concepts presented in terms of the vector calculus that would just muddy the waters for me.
I wanted to avoid the kind of article I today see on the Net — some very scholarly pieces that clearly offer good insight but are tedious and guaranteed to lose most of the engineering audience in the first two seconds. I wanted an answer derived from basic common sense, if that was possible.
Cutting to the chase
“What you want to do is just envision a wave approaching a wire and inducing a voltage in it, and forget about everything else,” Mike said. “One basic equation. Do that, and you'll get an answer pretty close to what the experts say.” He was right. But hadn't this been done long ago?
Maybe, but in the end, I wrote and easily solved just one simple equation in 10 minutes. It skirted all extraneous questions about how much of the wave front would actually cross the wire in passing by, what the specific current flow on that wire might be, or how and where the charges on the wire migrate during a given cycle. I also avoided any rigorous proofs regarding all those fancy terms from Maxwell's equations; none of it was really necessary.
I made one initial assumption: The wave-to-dipole configuration was in total symmetry, so the induced voltage I was calculating would likely be what I would hypothetically measure at the dipole's center, or perhaps half of that voltage. (The former assumption turned out to be correct, though I have yet to inspect the formal proof.) Without really knowing what had gone before, I wrote a simple line integral for the work needed to move a charge a given distance, and I kind of turned it sideways to express the voltage, e, generated by a field cutting across a wire oriented for maximum reception — the dot product of the electric field and the prescribed path. Thus:

e = ∫ E cos[(2π/λ)x] dx
where E is the peak amplitude of the wave's electric field in volts/meter, l is the wire length, and x is the position along the wire. It was critical to recognize that the electric field, which is parallel to the wire, oscillates along the wire, so the value of the cosine term is a function of antenna length. In other words, the temporal phase ωt corresponds to the spatial phase (2π/λ)x, reaching (2π/λ)l across the full wire, and thus for a half-wave dipole (l = λ/2), integrating over x from −λ/4 to +λ/4:

e = (Eλ/2π) [sin(2πx/λ)], evaluated from x = −λ/4 to +λ/4
This expression yields:

e = (Eλ/2π) [sin(π/2) − sin(−π/2)] = Eλ/π ≈ 0.318 Eλ
It was as simple as that, and the value is what you'll see quoted most often today. I didn't need anything I subsequently found out about the current distribution on a transmitting antenna, as had been laid out long before in Pocklington's equation — though some derivations of received signal incorporate it today, thus implying my result had to be consistent with that equation. In that context, I had incorporated the cosine function in the electric field component of my wave, so any resulting currents would be of the same form and would most likely be proportional to the induced voltage at every point along the wire, as I'd expect in a linear circuit.
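As a quick sanity check, the line integral can also be evaluated numerically. This is a sketch of my own (the function name and normalization are mine, not from any text): it sums E cos(2πx/λ) across the half-wave element and lands on Eλ/π; passing sine instead shows the cancellation Mike warned about.

```python
import math

def induced_voltage(E, wavelength, phase_fn=math.cos, n=100_000):
    """Midpoint-rule estimate of e = ∫ E·phase_fn(2πx/λ) dx,
    integrated along the half-wave dipole, x from −λ/4 to +λ/4."""
    a = -wavelength / 4
    dx = (wavelength / 2) / n
    return sum(E * phase_fn(2 * math.pi * (a + (i + 0.5) * dx) / wavelength)
               for i in range(n)) * dx

E, lam = 1.0, 1.0
print(induced_voltage(E, lam))            # ≈ 0.3183, i.e., Eλ/π
print(induced_voltage(E, lam, math.sin))  # ≈ 0: the sine terms cancel
```

With the field written as a cosine, the two halves of the wire add; with a sine, they cancel to zero, which is exactly the symmetry issue discussed below.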
Overall, I happened upon a good answer without the need for a textbook filled with a lot of tough introductory math that might have helped me understand why I could write a given equation but would also bury me with concepts that were too time-consuming (and not particularly necessary) for me to master.
Mike pointed out one flaw with the initial integral. If I had defined the instantaneous field strength using a sine rather than a cosine function, I might have run into symmetry/boundary issues, as, for example, is often the case when using Fourier analysis to resolve a complex waveform. Curiously, I chose cosine simply because that's how I viewed the standing wave of voltage on a half-wave dipole.
Indeed, I would have had a null result in evaluating the integral using the sine function (the terms would have canceled out instead of adding). For such reasons, many authors today approach these types of problems algebraically. For instance, the induced voltage is proportional to the product of field strength and the antenna's effective length (related to effective aperture or area). They then calculate that length to be about 0.32 of a wavelength for a half-wave dipole, so that:

e ≈ 0.32 Eλ
As I mentioned, others take up Pocklington's and P.S. Carter's (1932) work by incorporating the average value of the resulting current (0.636 of peak) flowing in the antenna, which is doubled when evaluating a more involved integral. They come out with something like e = 1.27 Eλ/4 = Eλ/π. In any case, there's no substitute for performing a measurement in the field, if you can, to verify your findings. It's not unknown for engineers to overlook some basic issues in their analyses, as I'll mention later.
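It's worth seeing how closely the three routes agree. This short check (variable names are my own, with the wavelength normalized to 1) compares the effective length from my cosine integral, the ≈0.32λ figure quoted in texts, and the Carter-style 1.27 × λ/4 value:

```python
import math

lam = 1.0                        # normalized wavelength
l_integral = lam / math.pi       # from the cosine line integral: e = Eλ/π
l_quoted   = 0.32 * lam          # the ≈0.32λ effective length quoted in texts
l_carter   = 1.27 * lam / 4      # 2 × 0.636 (average/peak current) × λ/4

print(l_integral, l_quoted, l_carter)  # ≈ 0.3183, 0.32, 0.3175
```

All three land within about half a percent of one another, which is why the quoted values look interchangeable in practice.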
Moving down the line
Now we move on to the next problem, whose answer is well established. Say you can't remember the equation for the impedance (Zo) of a lossless coaxial transmission line. All you can vaguely recall is an involved proof from a college text. But if you are MacGyver, your job is done with one equation: remember that the transmission line model is basically a continuous chain of series inductances and parallel capacitances from beginning to end, and that energy is transferred from section to section as an electromagnetic wave propagates down the line toward its matched load. So we can simply apply the basic energy-storage equations we learned in fundamental AC theory. Thus, the energy (W) stored by and transferred to each succeeding L and C section is:

W = LI²/2 = CV²/2

Setting the two equal and recognizing that Zo = V/I gives Zo = √(L/C).
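The energy-balance shortcut takes one line to check numerically. In this sketch, the per-unit-length values are assumptions of mine, chosen as roughly representative of common 50-ohm coax; plug in your own cable's figures:

```python
import math

# Assumed per-unit-length values, roughly representative of 50-ohm coax:
L = 250e-9    # series inductance, H/m
C = 100e-12   # shunt capacitance, F/m

# Equal energy handed from each L section to each C section,
# (1/2)·L·I² = (1/2)·C·V², so Zo = V/I = sqrt(L/C)
Zo = math.sqrt(L / C)
print(Zo)  # ≈ 50 ohms
```

Note that the section length drops out: L and C scale together per unit length, so any uniform slice of the line gives the same Zo.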
In part 2, I'll show an example of a fairly big payoff from using intuition and MacGyver's techniques — a relatively simple, specific answer to an antenna problem that apparently went formally unsolved for at least 55 years until 2010.