
Recreating the THX Deep Note

If you’ve ever watched a movie in a movie theater, chances are you are familiar with the Deep Note, the audio logo of THX. That sound is one of the first things we hear at the beginning of a screening in a THX-certified venue. I’ve always been fascinated by that great, distinctive crescendo, starting from an eerie cluster of tones and ending in a full-range, bright and grand finale. What an ear treat!

Yesterday, (probably) out of nowhere, the origins of that sound tickled my curiosity, and I did a little research. I’m deeply moved by the history behind it, and I want to share what I’ve learned with you. Then we will move on to recreating that sound ourselves, so get your scissors and some glue ready!

The best source of information I could find about the sound, which I think is a complete electro-acoustic composition in its own right, is a 2005 post on the great Music Thing blog. The link to the post is here.

So here is some trivia:

  • It was made by Dr. James Andy Moorer in 1982.
  • At one point in history, it was being played 4000 times a day, almost every 20 seconds!
  • A quote from Mr. Moorer: “I like to say that the THX sound is the most widely-recognized piece of computer-generated music in the world. This may or may not be true, but it sounds cool!”
  • It was generated on a mainframe computer called the ASP (Audio Signal Processor), which was able to synthesize the sound in real time.
  • It took 20,000 lines of C code to write the program that generated the score for the ASP to play. The generated score consisted of 250,000 lines of statements to be executed by the ASP.
  • The oscillators used as voices read a digitized cello tone as their waveform. Mr. Moorer recalls the waveform having around 12 audible harmonics. The ASP was able to run 30 oscillators of this sort in real time. (In comparison, the notebook computer I happen to be using right now can handle more than 1000 of them without a glitch.)
  • The sound itself is copyrighted, but here is a wrinkle: the code Mr. Moorer wrote has generative characteristics (i.e. it relies on random number generators), so each time you generate a score and feed the resulting statements to the ASP, the sound comes out somewhat different. So I don’t really think it is safe to say that the process itself is, or can be, “copyrighted”. The sound itself, yeah, that one is covered.
  • It debuted in the THX trailer of Return of the Jedi, before the film’s premiere in 1983.
  • The generative characteristics of the process became troublesome at some point. After the piece was released with “Return of the Jedi”, the original recording of the Deep Note was lost. Mr. Moorer recreated the piece for the company, but they kept complaining that it didn’t sound the same as the original. Eventually, the original recording was found and kept in a safer place from then on.
  • Dr. Dre asked for permission to sample the sound for his music but was turned down. He used it anyway and got sued.
  • Metastasis, a 1954 composition by Iannis Xenakis, has a very similar opening crescendo (among other works by various composers). It starts from a single tone instead, and lands on a semi-dissonant tone cluster rather than a consonant one as in the Deep Note.

The sound recording used for the trademark application can be heard here:

http://www.uspto.gov/go/kids/soundex/74309951.mp3

Be sure to listen to that recording, because we will be referring to it when we have a go at recreating the Deep Note. You can also listen to other renditions of the piece: http://www.thx.com/cinema/trailers.html

Here is some technical/theoretical trivia before we start synthesizing:

  • This is my own observation: the original recording on the United States Patent and Trademark Office website has a fundamental pitch that sits right between D and Eb, but the newer renditions on more recent features have a fundamental between E and F. I’ll use the original D/Eb fundamental in my recreation attempt. The newer versions are also usually shorter, if I’m not mistaken. Clearly, I like the one submitted to the USPTO better.
  • According to Mr. Moorer (and also confirmed by my ears), the piece starts with oscillators tuned to random frequencies between 200Hz and 400Hz. But the oscillators are not simply buzzing away at fixed pitches: their frequencies are modulated randomly, with smoothers to smooth out the random pitch transitions. This goes on until the crescendo is initiated later in the piece.
  • Inside the crescendo and in the final landing sound, the randomizers are still modulating the frequencies of the oscillators, so no oscillator is stable at any given moment. But the random sweep range is narrow, so it merely adds an organic, chorus-like feel to the sound sources.
  • Mr. Moorer recalls the digitized cello sound had around 12 audible harmonics in its spectrum.
  • To my knowledge, the written score (which was used to get the copyright) was never released, but Mr. Moorer says he can supply the score if we can get permission from THX, though I think that is not really necessary for an attempt at recreating the piece.
  • The final landing sound (technically not a chord) is, to my ears, just stacked-up octaves of the fundamental. So when recreating the piece, we will start with randomly tuned oscillators (between 200Hz and 400Hz), make a semi-sophisticated sweep, and land on stacked-up octaves over a fundamental that sits between a low D and Eb.

So let’s get going. SuperCollider is my tool of choice here. I’ll start with a simple waveform: a sawtooth wave as the oscillator source, since it has a rich harmonic spectrum containing both even and odd partials. I’ll filter out the upper partials later on. Here is the initial code:

// 30 oscillators together, distributed across the stereo field
(
{
    var numVoices = 30;
    // generating initial random fundamentals:
    var fundamentals = { rrand(200.0, 400.0) } ! numVoices;
    Mix({ |numTone|
        var freq = fundamentals[numTone];
        Pan2.ar(
            Saw.ar(freq),
            rrand(-0.5, 0.5), // stereo placement of voices
            numVoices.reciprocal // scale the amplitude of each voice
        )
    } ! numVoices);
}.play;
)

I chose to have 30 oscillators for sound generation, in line with the capabilities of the ASP computer as reported by Mr. Moorer. I create an array of 30 random frequencies between 200Hz and 400Hz, assign them to 30 sawtooth oscillators, and distribute the voices randomly across the stereo field with Pan2.ar and the argument rrand(-0.5, 0.5). Here is how it sounds:

[audio example]
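Incidentally, you can verify the claim about the sawtooth spectrum numerically. This quick Python check (a naive DFT; everything here is my own illustration, not part of the SuperCollider patch) shows that an ideal sawtooth contains every integer harmonic, with amplitudes falling off as 1/n:

```python
import cmath
import math

N = 1024
# one cycle of an ideal (non-bandlimited) sawtooth, ramping from -1 to 1
saw = [2 * (i / N) - 1 for i in range(N)]

def dft_mag(signal, k):
    # magnitude of the k-th DFT bin (k cycles per analysis window)
    n = len(signal)
    s = sum(x * cmath.exp(-2j * math.pi * k * i / n) for i, x in enumerate(signal))
    return abs(s) / n

# magnitudes of harmonics 1..5: each is about 1/(pi * k), a 1/n rolloff
mags = [dft_mag(saw, k) for k in range(1, 6)]
```

That 1/n rolloff is what gives the sawtooth its brightness, and it is exactly what the lowpass filters will shave down later.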

Now, if we examine the info provided by Mr. Moorer and listen closely to the original piece, we can hear that the pitches of the oscillators drift up and down randomly. We want to add this for a more organic feel. The frequency scale is logarithmic, so lower frequencies should have narrower wobbling ranges than higher ones. We can implement this by sorting our randomly generated frequency values and assigning the mul arguments of LFNoise2 (which generates quadratically interpolated random values) in increasing order inside our Mix function. I also added a lowpass filter to each oscillator, with a cutoff frequency of 5 times the oscillator’s frequency and a moderate reciprocal Q:

// adding random wobbling to freqs, sorting randoms, lowpassing
(
{
    var numVoices = 30;
    // sorting to get high freqs at top
    var fundamentals = ({ rrand(200.0, 400.0) } ! numVoices).sort;
    Mix({ |numTone|
        // fundamentals are sorted, so higher frequencies drift more
        var freq = fundamentals[numTone] + LFNoise2.kr(0.5, 3 * (numTone + 1));
        Pan2.ar(
            BLowPass.ar(Saw.ar(freq), freq * 5, 0.5),
            rrand(-0.5, 0.5),
            numVoices.reciprocal
        )
    } ! numVoices);
}.play;
)
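To make the scaling concrete: the mul argument of LFNoise2 above grows linearly with the sorted voice index, so the per-voice drift depths look like this rough Python illustration (names are mine):

```python
num_voices = 30
# per-voice drift depth, mirroring LFNoise2.kr(0.5, 3 * (numTone + 1)):
drift_depths = [3 * (k + 1) for k in range(num_voices)]
# the lowest sorted voice wobbles within roughly +/-3 Hz,
# the highest within roughly +/-90 Hz
```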

Here is how it sounds with the latest tweaks:

[audio example]

This sounds like a good starting point, so let’s start implementing our sweep, initially in a very crude way. To implement the sweep, we first need to define the final landing pitch for each oscillator. This is not entirely straightforward, but not very hard either. The fundamental tone should be the pitch right in between low D and Eb, so the MIDI pitch for that tone is 14.5 (0 is C; count up chromatically; I’m skipping the first octave). So we need to map the freq arguments of our 30 oscillators from random frequencies between 200Hz and 400Hz to MIDI pitch 14.5 and its octaves. By ear, I’ve chosen to use the first 6 octaves. So our final array of destination frequencies will be:

(numVoices.collect({|nv| (nv/(numVoices/6)).round * 12; }) + 14.5).midicps;
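As a sanity check on that expression, here is a rough Python equivalent; `midicps` mirrors SuperCollider’s MIDI-to-Hertz conversion (A4 = MIDI 69 = 440Hz), and all names here are mine:

```python
def midicps(m):
    # MIDI note number to frequency in Hz, with A4 (MIDI 69) at 440 Hz
    return 440.0 * 2.0 ** ((m - 69.0) / 12.0)

num_voices = 30
# 30 voices mapped onto octaves 0..6 above the D/Eb quarter-tone (MIDI 14.5)
final_pitches = [midicps(round(nv / (num_voices / 6)) * 12 + 14.5)
                 for nv in range(num_voices)]
# final_pitches[0] is about 18.9 Hz; the top voices sit 6 octaves (64x) above it
```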

We’ll be using a sweep value that goes from 0 to 1. The random frequencies will be multiplied by (1 - sweep), and the destination frequencies by sweep itself. So when sweep is 0 (the beginning), freq is the random value; when it is 0.5, it is ((random + destination) / 2); and when it is 1, freq is our destination value. Here is our modified code:

// creating the initial sweep (crude), creating final pitches
(
{
    var numVoices = 30;
    var fundamentals = ({ rrand(200.0, 400.0) } ! numVoices).sort;
    var finalPitches = (numVoices.collect({ |nv|
        (nv / (numVoices / 6)).round * 12;
    }) + 14.5).midicps;
    var sweepEnv = EnvGen.kr(Env([0, 1], [13]));
    Mix({ |numTone|
        var initRandomFreq = fundamentals[numTone] + LFNoise2.kr(0.5, 3 * (numTone + 1));
        var destinationFreq = finalPitches[numTone];
        var freq = ((1 - sweepEnv) * initRandomFreq) + (sweepEnv * destinationFreq);
        Pan2.ar(
            BLowPass.ar(Saw.ar(freq), freq * 5, 0.5),
            rrand(-0.5, 0.5),
            numVoices.reciprocal // scale the amplitude of each voice
        )
    } ! numVoices);
}.play;
)
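The freq line in the code above is plain linear interpolation between the two frequencies; a minimal Python sketch of the endpoint behavior (names are mine):

```python
def sweep_freq(init_freq, dest_freq, sweep):
    # sweep runs from 0 to 1, blending the random start frequency
    # into the destination frequency, mirroring the formula in the patch
    return ((1.0 - sweep) * init_freq) + (sweep * dest_freq)

start = sweep_freq(300.0, 600.0, 0.0)   # 300.0: pure random frequency
middle = sweep_freq(300.0, 600.0, 0.5)  # 450.0: the average of the two
end = sweep_freq(300.0, 600.0, 1.0)     # 600.0: pure destination
```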

Here is the sound:

[audio example]

As I said earlier, this is a very crude sweep. It goes linearly from 0 to 1, which is not congruent with the original composition. You will also have noticed that the final octaves sound AWFUL, because they are tuned to perfect octaves and fuse into each other, with exact fundamental-overtone relationships between them. We will fix this by adding random wobbling to the final pitches, just as we did with the initial random pitches, and it will sound much, much more organic.

So let’s fix the frequency sweep envelope first. The earlier envelope was just for trying out the formulas (and the final landing). If we listen to the original piece, we notice very little change in organization for the first 5-6 seconds. After that there is a fast, exponential-feeling sweep that lands the oscillators on the final octave-spaced destinations. Here is the envelope I’ve chosen:

sweepEnv = EnvGen.kr(Env([0, 0.1, 1], [5, 8], [2, 5]));
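What do the curve values in the third argument do? For numeric curves, SuperCollider shapes each segment exponentially; as far as I can tell from the Env documentation, the interpolation works roughly like the formula below. This Python sketch is my approximation, not SC source code:

```python
import math

def env_segment(a, b, curve, pos):
    # approximate value at pos (0..1) along an Env segment from a to b
    # with a numeric curvature; curve near 0 degenerates to a straight line
    if abs(curve) < 1e-4:
        return a + (b - a) * pos
    return a + (b - a) * (1.0 - math.exp(pos * curve)) / (1.0 - math.exp(curve))

# a positive curve makes an ascending segment start slowly and finish fast:
halfway = env_segment(0.0, 1.0, 5.0, 0.5)  # roughly 0.08, well below 0.5
```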

It takes 5 seconds to go from 0 to 0.1, and 8 seconds to go from 0.1 to 1. The curvatures of the segments are 2 and 5. We’ll see how that works out, but we also need to fix the final chord spacing. Just as we did with the initial random frequencies, we will add random wobbles with LFNoise2, with a range proportional to each oscillator’s final frequency. This will make the finale sound much more organic. Here is the modified code:

// tweaking the envelope, detuning the final chord
(
{
    var numVoices = 30;
    var fundamentals = ({ rrand(200.0, 400.0) } ! numVoices).sort;
    var finalPitches = (numVoices.collect({ |nv|
        (nv / (numVoices / 6)).round * 12;
    }) + 14.5).midicps;
    var sweepEnv = EnvGen.kr(Env([0, 0.1, 1], [5, 8], [2, 5]));
    Mix({ |numTone|
        var initRandomFreq = fundamentals[numTone] + LFNoise2.kr(0.5, 3 * (numTone + 1));
        var destinationFreq = finalPitches[numTone] + LFNoise2.kr(0.1, (numTone / 4));
        var freq = ((1 - sweepEnv) * initRandomFreq) + (sweepEnv * destinationFreq);
        Pan2.ar(
            BLowPass.ar(Saw.ar(freq), freq * 8, 0.5),
            rrand(-0.5, 0.5),
            numVoices.reciprocal
        )
    } ! numVoices);
}.play;
)

Here, I’ve also tweaked the cutoff frequency of the lowpass filter to my liking. I like tweaking stuff, until it alienates me from what I’ve been working on... Anyway, here is the resulting sound:

[audio example]

I’m not really happy with this envelope either. It needs a longer initialization and a faster finish. Or wait... do I have to use the same envelope for every oscillator? Absolutely not! Each oscillator can have its own envelope with slightly different time and curve values, and I bet it will sound more interesting. Also, the high-frequency overtones of the random sawtooth cluster are a bit annoying, so I’m adding a lowpass filter to the sum, whose cutoff is controlled by a global “outer” envelope that has nothing to do with the envelopes of the individual oscillators. Here is the modified code:

// custom envelopes, lowpass at the end
(
{
    var numVoices = 30;
    var fundamentals = ({ rrand(200.0, 400.0) } ! numVoices).sort;
    var finalPitches = (numVoices.collect({ |nv|
        (nv / (numVoices / 6)).round * 12;
    }) + 14.5).midicps;
    var outerEnv = EnvGen.kr(Env([0, 0.1, 1], [8, 4], [2, 4]));
    var snd = Mix({ |numTone|
        var initRandomFreq = fundamentals[numTone] + LFNoise2.kr(0.5, 3 * (numTone + 1));
        var destinationFreq = finalPitches[numTone] + LFNoise2.kr(0.1, (numTone / 4));
        var sweepEnv = EnvGen.kr(
            Env(
                [0, rrand(0.1, 0.2), 1],
                [rrand(5.0, 6), rrand(8.0, 9)],
                [rrand(2.0, 3.0), rrand(4.0, 5.0)]
            )
        );
        var freq = ((1 - sweepEnv) * initRandomFreq) + (sweepEnv * destinationFreq);
        Pan2.ar(
            BLowPass.ar(Saw.ar(freq), freq * 8, 0.5),
            rrand(-0.5, 0.5),
            numVoices.reciprocal
        )
    } ! numVoices);
    BLowPass.ar(snd, 2000 + (outerEnv * 18000), 0.5);
}.play;
)

The slightly out-of-phase envelopes make the sweep a bit more interesting, and the lowpass starting at 2000Hz helps tame the initial cluster. Here is what it sounds like:

[audio example]

I have one more idea that will make the process sound more interesting. Remember we sorted the random oscillator frequencies at the beginning? Well, we can now reverse that sort, so that oscillators starting at higher random frequencies end up in the bottom voices after the crescendo, and vice versa. This adds more “movement” to the crescendo, and it is quite congruent with the way the original piece is structured. I’m not sure whether Mr. Moorer programmed it specifically this way, but the chosen recording demonstrates this behavior, and it sounds cool, whether it is a random product of the generative process itself or a compositional choice (oh, did I say that? If the process allows it, it IS a choice... or is it?). So I’ll reverse the sorted values, and the way the code is structured will ensure that the higher-pitched sawtooths end up in the lower voices in the finale, and vice versa.

Another thing: we need a louder bass. As it is now, all voices have equal amplitude. I want the lower voices to have slightly higher amplitude, decaying proportionally as the frequency goes up, so I’ll change the mul argument of Pan2 to take this into account. I’ll also re-tweak the cutoff frequencies of the lowpass filters governing the individual oscillators, and add a global amplitude envelope that fades the piece in, fades it out at the end, and frees the synth on the server. With some more numeric tweaks here and there, here is our final code:

// inverting the initial sort, louder bass, final volume envelope, some little tweaks
(
{
    var numVoices = 30;
    var fundamentals = ({ rrand(200.0, 400.0) } ! numVoices).sort.reverse;
    var finalPitches = (numVoices.collect({ |nv|
        (nv / (numVoices / 6)).round * 12;
    }) + 14.5).midicps;
    var outerEnv = EnvGen.kr(Env([0, 0.1, 1], [8, 4], [2, 4]));
    var ampEnvelope = EnvGen.kr(Env([0, 1, 1, 0], [3, 21, 3], [2, 0, -4]), doneAction: 2);
    var snd = Mix({ |numTone|
        var initRandomFreq = fundamentals[numTone] + LFNoise2.kr(0.5, 6 * (numVoices - (numTone + 1)));
        var destinationFreq = finalPitches[numTone] + LFNoise2.kr(0.1, (numTone / 3));
        var sweepEnv = EnvGen.kr(
            Env(
                [0, rrand(0.1, 0.2), 1],
                [rrand(5.5, 6), rrand(8.5, 9)],
                [rrand(2.0, 3.0), rrand(4.0, 5.0)]
            )
        );
        var freq = ((1 - sweepEnv) * initRandomFreq) + (sweepEnv * destinationFreq);
        Pan2.ar(
            BLowPass.ar(Saw.ar(freq), freq * 6, 0.6),
            rrand(-0.5, 0.5),
            (1 - (1 / (numTone + 1))) * 1.5
        ) / numVoices
    } ! numVoices);
    Limiter.ar(BLowPass.ar(snd, 2000 + (outerEnv * 18000), 0.5, (2 + outerEnv) * ampEnvelope));
}.play;
)

And here is the final recording of the piece:

[audio example]

You may want to compare it with this original one:

http://www.uspto.gov/go/kids/soundex/74309951.mp3

So this is my rendition. Of course it can be tweaked to death: envelopes, frequencies, distribution, everything... Nevertheless, I think it is a decent attempt at keeping the legacy alive. I’d love to hear your comments and/or your own shots at interpreting this piece.

——————–

Oh, and here is one more thing I did for fun. I told you it took 20,000 lines of C code to generate the original piece; I’m pretty sure Mr. Moorer had to build almost everything from scratch, so that is not surprising at all. But we’ve been sctwitting for some time, trying to fit working pieces into 140 characters of code. So, for the fun of it, I tried to replicate the essential elements of the composition in 140 characters. I think it still sounds cool. Here is the code (this one uses an F/E fundamental):

play{Mix({|k|k=k+1/2;2/k*Mix({|i|i=i+1;Blip.ar(i*XLine.kr(rand(2e2,4e2),87+LFNoise2.kr(2)*k,15),2,1/(i/a=XLine.kr(0.3,1,9))/9)}!9)}!40)!2*a}

And here is the sound this version generates:

[audio example]

All the code on this page is collected in this document for you to experiment with: - get from here -

Happy sweeping…

Comments

    Beautifull coding and sound design session.I'm new to supercollider and your examples and sounds decide me to study more deeply supercollider. Really awesome stuff. Thank you very much . s.boussuge

    stephane boussuge · Jul 25, 08:59 PM · #

    Great little intro to SC! Thanks.

    — Dan · Jul 26, 01:33 AM · #

    Excellent post! Here is the same piece of music coded in ChucK. https://lists.cs.princeton.edu/pipermail/chuck-users/2009-April/004133.html ahmet

    — ahmetkizilay · Jul 26, 08:17 PM · #

    Ahmet, thanks for the link of the ChucK version. I quickly grabbed miniAudicle and gave it a shot. Happy to lurk through another implementation/interpretation of the piece!

    Batuhan · Jul 27, 04:44 AM · #

    You can create a decent facsimile of this sound (by which I mean most people will recognize it, if they're familiar with the THX sound) with an electric guitar with a floyd rose bridge and a Digitech 2101 processor. There's a certain preset on the digitech 2101 that sounds kind of like a pipe organ -- kind of an envelope filter that kills the attack plus a harmonizer, or something along those lines. Setup that preset, then just fret a chord, an E chord, or maybe a G chord, drop the bar way way down into string flapping territory, and hit it, then slowly bring the bar up to resolve the cacaphony to whatever chord you've fretted. Sounds a lot like the THX sound.

    SteveC · Jul 27, 01:41 PM · #

    Considering that THX stupidly filed a copyright claim to pull an a capella version of the sound off of YouTube, it's a fair bet those tools couldn't care less about fair use or your right to recreate a ubiquitous 20-second sound and post the results. Good luck.

    Clumpy · Jul 27, 06:27 PM · #

    This is a great teaching resource and absolutely fascinating article. Thanks for sharing!

    Jamie Bullock · Jul 27, 07:19 PM · #

    In the original, I can clearly hear at least one tone that *drops* in frequency just before the final note. That's a large part of the coolness. I like your version too though.

    — oblivion95 · Jul 27, 08:09 PM · #

    THX actually has a *TRADEMARK* on this sound, which means that its use in any setting within which it could be interpreted as indicating their brand, such use is illegal. ("protected") This is likely why LucasFilm got uppity about the YouTube version. And presumably you've got a solid case for keeping this up if they come after you. Just don't try to use it in any videos, especially not at the beginning. Your code is really good work, and thanks for sharing.

    — Greg · Jul 27, 08:27 PM · #

    A-a-a-a-a-a-a-a-awesome

    August Lilleaas · Jul 27, 11:25 PM · #

    Being a self-proclaimed music nut who programs for a living, I must admit I had never even thought of how they generated that sound. Definitely and interesting read!

    — Lawrence · Jul 28, 04:11 AM · #

    How hard is it to reverse engineer the clip by passing it throught a spectrum analyzer and figuring out what the oscillators are doing over time?

    — Wing · Jul 28, 12:12 PM · #

    This post wins!

    Acrylic Style · Jul 28, 02:22 PM · #

    Nice story. To manipulate lots of oscillators the "Xenakis way", HighC is a nice application, inspired from Xenakis' UPIC. http://highc.org I did a simple version in HighC (took me 5 minutes) check the sample THX at: http://highc.org/demo.html (run the applet and choose the 3rd sample).

    Thomas Baudel · Jul 28, 06:28 PM · #

    @Thomas, The applet spits an error (java.lang.NoClassDefFoundError) but I could listen to it from here: http://highc.org/samples/demo.html I haven't heard about HighC before, but I'm going to check that out, I'm always interested in composing interfaces. The demo you had demonstrates the idea but has its own problems I think. Primarily it resolves to "perfect octaves" which perfectly fuse into each other and it sounds awkward (like a single tone, since the frequencies have integer multiple relationships). No random drifting in oscillators also make things sound dull a bit. I'm not telling mine is awesome here, but I'm interested to see if it is easy to do these with HighC (I mean procedural alteration of drawn lines etc.). @Greg, thanks for the Trademark clarification. I'm still not clear on the legal part of this though. @Wing, yes that is easy, in fact, the Wikipedia article I linked for Deep Note has pictures of spectrum analysis for the sound (made with Baudline).

    Batuhan · Jul 28, 06:49 PM · #

    @Batuhan: thanks for fixing the link. As for the demo, I did this very fast, and I'm sure lots of improvements are easy to make. Modulations are easy to perform by drawing a modulating curve and either "Effects>Modulate in Frequency" or Effects>Modulate in Amplitude". Also, there are a bunch of narrow band noise waveforms to introduce some more randomness in the drifts. I just don't have much time to make it much better.

    Thomas Baudel · Jul 28, 08:02 PM · #

    Thanks for the clarification Thomas, that kind of flexibility is interesting. I'll check it out. Best

    Batuhan · Jul 28, 08:09 PM · #

    Thanks for the trip down memory lane, and congratulations for a job well done. I really wish I could share the details with everyone. Maybe someday! Let 1024 blossoms bloom . . . Andy Moorer

    James A. Moorer · Jul 29, 09:38 AM · #

    Wow... That was unexpected. :) Thanks for the comment, Mr. Moorer, I'm flattered. Thanks to "you" for the inspiration.

    Batuhan · Jul 29, 09:43 AM · #

    Is that the same Andy Moorer who composed "Lions Are Growing?" One of my favorite electronic pieces from the early days. I had no idea he did the THX sound, so I'm glad to hear that his talent was so well recognized.

    — Captain Mikee · Aug 14, 05:54 AM · #

    _ex_ it should be on the main distro, I'm using a few weeks old svn version. It's in the BEQSuite of UGens.

    Batuhan · Aug 16, 03:02 PM · #

    I just found your site through this amazing post (and I hope you don't mind that I've linked to it on my blog here: http://colorokay.blogspot.com/ I can't wait to read more!

    Benjamin Albertus · Sep 24, 07:57 AM · #

    Delightful, perfect, awesome.

    f.e · Dec 12, 10:39 PM · #

    Good article! Thanks!

    SMiGL · Jan 6, 06:12 AM · #

    very good tutorial and explanation thank you very much batuhan

    baran gulesen · Jan 25, 05:39 PM · #