I've done my fair share of power designs, both switching power supplies and H-bridge motor controllers. Back in the 1970s and 1980s, my designs were done with NPN bipolar power transistors. We rarely used PNP devices because they typically cost more and were not available in current ratings as high as their NPN counterparts (all else being equal).
We also shied away from Darlington transistors. They have very high gain, but their saturation voltage is also high, which significantly increases the power dissipated within the device.
Since the collector of the input transistor connects to the collector of the output transistor, once the output transistor starts to turn on, it pulls down the input transistor's collector voltage and robs it of drive. As a result, the compound device cannot saturate below roughly the output transistor's base-emitter drop plus the input transistor's saturation voltage, around 1 volt total. At high collector currents, the power dissipated in the device makes it run quite warm.
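To put rough numbers on that, here's a quick conduction-loss calculation. The saturation voltages are illustrative ballpark figures of my choosing, not values from any particular datasheet:

```python
# Rough conduction-loss comparison: Darlington vs. single bipolar transistor.
# Saturation voltages below are illustrative ballpark figures, not datasheet values.
def conduction_loss(v_ce_sat, i_collector):
    """Power dissipated in the switch while it is fully on, in watts."""
    return v_ce_sat * i_collector

i_c = 10.0  # amps of collector current

p_darlington = conduction_loss(1.0, i_c)  # ~1 V Darlington saturation voltage
p_single = conduction_loss(0.2, i_c)      # ~0.2 V single-transistor saturation

print(f"Darlington: {p_darlington:.0f} W, single transistor: {p_single:.0f} W")
# At 10 A, the Darlington burns about five times the power of a single device.
```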
From the 1980s to the present, the transistor of choice has generally been the N-channel MOSFET. As with the bipolar devices, P-channel FETs were not available in power ratings as high and were more expensive. With FETs, the extremely high input impedance makes driving the gate somewhat easier. Gate capacitance negates that advantage somewhat, especially at high switching frequencies, because the gate must be charged and discharged on every cycle.
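The average power a gate driver has to supply grows linearly with switching frequency: P = Qg x Vgs x f, where Qg is the total gate charge. A quick sketch, using an assumed 50 nC gate charge and 12 V drive (typical round numbers, not tied to any specific part):

```python
# Average gate-drive power for a MOSFET switched at frequency f:
#   P_drive = Q_g * V_gs * f
# Q_g is total gate charge. The 50 nC / 12 V figures are assumed round numbers.
def gate_drive_power(q_gate_nc, v_gs, f_hz):
    """Power in watts spent charging and discharging the gate each cycle."""
    return q_gate_nc * 1e-9 * v_gs * f_hz

for f in (10e3, 100e3, 1e6):
    p_mw = gate_drive_power(50, 12.0, f) * 1000
    print(f"{f / 1e3:>6.0f} kHz: {p_mw:.0f} mW")
# Negligible at 10 kHz, but hundreds of milliwatts by 1 MHz.
```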
I never used insulated gate bipolar transistors (IGBTs), partially because I never completely understood them. I initially thought of them as Darlington devices with an N-channel FET substituted for the input bipolar transistor. That would produce a device with extremely high input impedance and high overall gain, but of course there would still be the high saturation voltage and corresponding high power dissipation.
I am seeing more press releases recently that talk about IGBT devices, so I decided to take a closer look at what these devices really are. A quick look at Wikipedia shows that I was only partially correct in my assumptions about the inner workings.
Aha! It does in fact use an N-channel FET as the input device, but the bipolar output device is a PNP. That makes the device much more efficient, and it can have a very high breakdown voltage. Turn on the FET with just a few volts, and it turns the PNP transistor on hard. The structure also contains a parasitic NPN transistor; combined with the PNP, it makes the bipolar section look like an SCR. In fact, early IGBT devices suffered from latch-up: sometimes, once you turned them on, you couldn't turn them off unless you cut collector current flow (shut off the main power supply). That problem has been cured in modern devices.
By the way, you'll see different symbols for the IGBT; this one is semi-common:
Note that the upper terminal is called the collector, but it connects to the PNP emitter. That's just to simplify everyone's understanding of how it's used, rather than what's going on inside.
These devices are not the solution for all applications. They have a lower forward voltage drop than a regular MOSFET only when you compare them against comparable high-voltage, high-current devices (up into the kilovolt and hundreds-of-amps range). At more moderate current levels, regular FETs are better. And if you need high PWM switching rates, into the hundreds of kilohertz or megahertz range, again use conventional FETs.
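The reason is visible in a simple conduction-loss model: an IGBT drops a roughly constant saturation voltage, while a MOSFET's drop grows linearly with current (I x Rds(on)), so its loss grows with the square of current. High-voltage MOSFETs tend to have a large on-resistance, so beyond some crossover current the IGBT dissipates less. A sketch with assumed values (1.8 V and 0.25 ohms are illustrative, not from any datasheet):

```python
# Rough conduction-loss comparison at high voltage. Illustrative numbers only:
# the IGBT drops a roughly constant V_ce(sat); the MOSFET loss is I^2 * R_ds(on).
V_CE_SAT = 1.8   # volts, assumed IGBT on-state drop
R_DS_ON = 0.25   # ohms, assumed on-resistance of a high-voltage MOSFET

def igbt_loss(i):
    """IGBT conduction loss in watts: constant voltage drop times current."""
    return V_CE_SAT * i

def mosfet_loss(i):
    """MOSFET conduction loss in watts: resistive, grows with current squared."""
    return i * i * R_DS_ON

crossover = V_CE_SAT / R_DS_ON  # current at which the two losses are equal
print(f"Crossover at {crossover:.1f} A")
for i in (2.0, crossover, 20.0):
    print(f"{i:5.1f} A: IGBT {igbt_loss(i):6.1f} W, MOSFET {mosfet_loss(i):6.1f} W")
# Below the crossover the MOSFET wins; well above it the IGBT wins handily.
```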
Let me know if you've used these devices and how well they worked in your application.