Do the Harlem Shake

Parodies of the popular song “Do the Harlem Shake” tend to feature people just dancing a little bit until they hear a deep-voiced artist say, “Do the Harlem Shake.” Then they go wild. What if you wanted your robot to start dancing madly? Wouldn't it be nice to have a circuit that could detect the cue and activate your animatronic friend at the appropriate point in the song?

I used ViaDesigner to design and simulate a circuit that does just that. The simulator handles analog, digital, and mixed-signal circuits. In the video (below), a high-level mixed-signal circuit is designed and simulated. The circuit successfully indicates when the “Do the Harlem Shake” phrase is first spoken in the song.

ViaDesigner is very convenient to use. It includes schematic capture (an obvious starting point) and a simulator with the features you will likely need, such as active filter simulation. In this case, I used the filter wizard to create a bandpass filter, and I created the comparator and the D flip-flop from wizards as well.

The circuit simulation accepts signals from the real world. In this case, the circuit “plays” the song from a file-based voltage source that generates a “music” waveform from raw wavefile (.wav) values. This music waveform contains all the frequency components of the song. To isolate the low-frequency energy, the “music” signal is sent through a 10th-order continuous-time bandpass filter that attenuates frequencies outside the 100 to 200 Hz range. The output of this filter indicates the presence of low-frequency content in the song. Since the “Do the Harlem Shake” voice is so low (100 to 200 Hz), the filter output voltage rises when he starts talking.
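If you want to prototype the filtering stage in software before (or instead of) building the analog version, the same idea can be sketched in a few lines of Python. This is a digital stand-in for the continuous-time filter, not the ViaDesigner circuit itself; the sample rate and tone frequencies below are illustrative. Note that `scipy.signal.butter` doubles the requested order for a bandpass design, so `N=5` yields the 10th-order filter described above.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def bandpass_100_200(samples, fs):
    """10th-order Butterworth bandpass, 100-200 Hz.

    Uses second-order sections (sos) for numerical stability at
    low normalized frequencies. N=5 -> 10th order for a bandpass.
    """
    sos = butter(5, [100, 200], btype="bandpass", fs=fs, output="sos")
    return sosfilt(sos, samples)

def rms(x):
    return np.sqrt(np.mean(np.square(x)))

# Illustrative check: a 150 Hz "deep voice" tone passes,
# a 1 kHz tone is strongly attenuated.
fs = 8000
t = np.arange(fs) / fs                     # one second of samples
voice = np.sin(2 * np.pi * 150 * t)        # in-band tone
treble = np.sin(2 * np.pi * 1000 * t)      # out-of-band tone
```

Feeding each tone through `bandpass_100_200` and comparing RMS levels shows the in-band tone surviving essentially intact while the out-of-band tone is crushed, which is exactly the behavior the analog filter provides to the comparator stage.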

The output of the bandpass filter is then applied to a voltage comparator with a fixed reference voltage. When the filter output exceeds this threshold voltage, the comparator output goes high and clocks a D flip-flop, asserting the DO_THE_HARLEM_SHAKE signal. Note that the song contains other low-frequency content, but the deep voice is the first instance of it. The DO_THE_HARLEM_SHAKE signal is what you use to drive your robot's electronics.
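The comparator-plus-flip-flop behavior is easy to model in software too. The sketch below is an idealized stand-in, not the ViaDesigner circuit: the comparator is a simple magnitude threshold (the 0.5 value is an arbitrary placeholder for the reference voltage), and the flip-flop is modeled as a running OR, so the flag latches high at the first threshold crossing and stays high.

```python
import numpy as np

def harlem_shake_flag(filtered, threshold=0.5):
    """Comparator + D flip-flop sketch.

    filtered  : output samples of the bandpass filter
    threshold : stand-in for the comparator's reference voltage
    Returns a boolean array that goes True at the first sample
    exceeding the threshold and stays True thereafter (the latch).
    """
    comp = np.abs(filtered) > threshold        # comparator output
    return np.maximum.accumulate(comp)         # flip-flop: once set, stays set
```

For example, an input of `[0.1, 0.2, 0.9, 0.1]` with the default threshold produces `[False, False, True, True]`: the comparator only fires on the third sample, but the latched flag never drops back low, just as the flip-flop holds DO_THE_HARLEM_SHAKE asserted.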


Some points to highlight:

  • The filter was created with a ViaDesigner filter wizard;
  • The comparator and flop were made from ViaDesigner wizards as well;
  • It's a pretty cool little simulation: eight seconds of integer audio processed by a 10th-order bandpass filter, a comparator, and a digital circuit;
  • And it doesn't take several days to complete.

Yes, this is a horribly naive circuit: the audio voltages are assumed to be well behaved, a fixed slicing threshold is used, and the circuit cannot distinguish the deep voice from other low-frequency signals.

So, you think you can design a better Harlem Shake detector circuit? Sure you could, so download ViaDesigner and try. If you need any help getting the waveform generator working, let me know and I'll show you how to generate the waveforms. Share your “Do the Harlem Shake” detector circuit with us.

5 comments on “Do the Harlem Shake”

  1. WKetel
    March 21, 2013

    Sort of like that animated fish of a few years back, which used embedded commands to initiate the various functions. That would be the easiest way. Otherwise it could be done with a speech-to-text program and then a simple BASIC routine to detect a word pattern. That would be followed by a “print” command, where initiating the printer command would start the mayhem mode. That is a way to do it using all standard stuff running on a standard PC.

  2. Brad Albing
    March 24, 2013

    I think you'd have a tough time extracting usable info from the lyrics of most pop songs – the lyrics are too often too buried in the mix or are intentionally made tough to discern. Imagine hearing “Parachute Women” by the Stones. The only thing we know for sure is that part of the lyrics are “Parachute women…” something-something-something. So, tough to do speech to text.

  3. WKetel
    March 24, 2013

    Speech recognition would indeed have quite a challenge, but probably a pattern correlation program could do it easily.

    In years past I would get the lyrics by making a copy on reel-to-reel tape, and then I could manually run any short segment as fast or slow as I wanted, as many times as I needed to get all of the words right. Two hands turning the reels, it looked funny but it worked well. There might be a good enough algorithm to do it now but I would not bet on it. Although there is that program that recognizes songs played on the radio. Or does it grab the digital identifier transmitted with the music?

  4. Brad Albing
    March 24, 2013

    I believe it can identify the root notes in chord sequences and then match that to info in its library to figure out the melody and then the song.

  5. marc22
    August 14, 2014

    It's amazing what can be done using a bit of programming. Can you make the robot react to the Alembic bass guitar as well? Surprising the friends that know how to play the instrument with a dancing robot would be really fun.
