Earslap
2015-11-15T16:06:08+02:00
http://www.earslap.com/
Batuhan Bozkurt
The Browser Sound Engine Behind Touch Pianist
2015-05-24T17:41:00+03:00
http://www.earslap.com//article/the-browser-sound-engine-behind-touch-pianist
<p>At the beginning of May 2015, I released the fun browser experiment <a href="http://touchpianist.com" target="_blank">Touch Pianist</a>. I received a lot of questions from fellow developers about the tech used to make it tick, so here is my attempt at explaining the meat of the sound engine I created to make it possible.</p>
<p>Touch Pianist is an HTML5 web browser experiment using HTML5 Canvas (with optional WebGL thanks to <a href="http://www.pixijs.com" target="_blank">pixi.js</a>) and <a href="https://developer.mozilla.org/en-US/docs/Web/API/Web_Audio_API" target="_blank">WebAudio</a>. It visualizes all-time popular classical piano pieces and gives you a very addictive way of performing them using your computer keyboard or a touch screen. It also has iOS and Android versions with a bigger and growing music library.</p>
<p>These mobile apps also use native WebViews and run HTML, JavaScript and WebAudio under the hood, except for the audio on Android. Even <a href="https://code.google.com/p/android/issues/detail?id=3434" target="_blank">after all those years</a> (the link is to the infamous issue no. 3434), audio on Android still sucks quite a bit, and Chromium's WebAudio latency is still too high for the purposes of this instrument (though still pretty damn impressive; better than I expected considering the possible layers of buffering involved in the mess that is Linux audio). So I had to rewrite an inferior sound engine in Java to get it to work on Android devices. We’ll get back to that at the end of this document.</p>
<h2 id="the-important-constraints">The Important Constraints</h2>
<p>The website was mainly aimed at casual crowds of music-loving people who are not used to waiting more than a few seconds for a game-like experience to (down)load into their web browser. I needed a decent piano in the browser. However, a good-quality sampled piano library has an uncompressed size of a whopping few <strong>gigabytes</strong>. Disregarding the possible illegality of redistributing a commercial piano sample library, even if you compress the samples in such a library to lossy formats, they will still amount to a few hundred megabytes.</p>
<p>My aim was to have a convincing sampled piano in the browser at a cost of at most <strong>2 to 3 megabytes</strong>, so that the site would load just like any other JavaScript- and media-heavy site.</p>
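<p>The budget roughly works out with one compressed sample per key. As a sketch (the per-file size here is an assumed figure for illustration, not a measurement from the actual assets):</p>

```javascript
// Rough download-budget check: 88 keys, one compressed sample each.
// If a 3-4 second mono mp3 compresses to roughly 30 KB (an assumed
// figure), the whole set lands inside the 2-3 MB target.
var keyCount = 88;
var kbPerSample = 30; // assumption for illustration
var totalMB = keyCount * kbPerSample / 1024;
// totalMB ≈ 2.58 MB
```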
<p>Before I talk about the compromises that needed to be made to make this a reality, let’s talk about why a good piano sample library requires gigabytes worth of information to begin with. What is the big deal? Why can’t sampled piano creators sample the mere 88 keys of a good piano placed in a good room with good quality microphones, package it and call it a day?</p>
<h2 id="piano-construction-101">Piano Construction 101</h2>
<p>The piano is a relatively new instrument, about 300 years old. But the idea that made the piano possible goes back hundreds of years earlier; the piano is an important evolutionary step within the series of instruments that preceded it. Its construction continues to evolve even today, and the sound of a 200-year-old piano is significantly different from that of the modern piano we have now.</p>
<p>The Italian inventor of the piano named it <em>un cimbalo di cipresso di piano e forte</em> (“a keyboard of cypress with soft and loud”) and the name was later abbreviated as pianoforte (soft-loud) then simply as piano. The main selling point of this new instrument was that it allowed the player to comfortably control the <em>loudness</em> of the notes while playing it. It’s like the inventor, <a href="http://en.wikipedia.org/wiki/Bartolomeo_Cristofori" target="_blank">Bartolomeo Cristofori</a>, fulfilled a feature request long sought after by music performers and composers. He had to solve a previously unsolved mechanical problem to make it happen (how do you build a keyboard instrument in which when you press a key, a hammer strikes a string but immediately detaches from it to let the string ring instead of sticking to it and damping it? And do this while allowing the hammer to be actuated in quick succession at the same time?).</p>
<p>The predecessors of the piano had a problem with loudness. The <a href="http://en.wikipedia.org/wiki/Harpsichord" target="_blank">harpsichord</a> was quite loud, but the player had no control over the loudness of individual pitches. The <a href="http://en.wikipedia.org/wiki/Clavichord" target="_blank">clavichord</a>, in contrast, allowed you to control the dynamics of the sound produced, but the compromises made to achieve this meant that the instrument was too quiet for bigger performances.</p>
<p>The pianoforte’s sound dynamics control was its main selling point. And the evolution of the modern piano saw many of its advancements around that particular <em>feature</em>.</p>
<p>What this means for our purposes is that we should consider each <em>note</em> of a piano as a single instrument. A piano can be thought of as an instrument containing 88 sub-instruments, one per key, because the timbre of a single note played softly is <em>quite</em> different from the same note played loudly. It’s not just the loudness that changes; the timbre of the whole sound (especially the attack portion) is different. This means one can’t simply get away with recording a note of a piano at a single loudness level and letting the software adjust its volume based on the velocity of the input. It sounds awkward. When you play back a loud piano note at a quieter level, it doesn’t sound like a softly performed piano note; it sounds like a loud piano note played back at a lower volume. Just like how you can’t pass off a human scream as whispering merely by decreasing its playback volume. This simple volume adjustment method might be passable for some cases (it might be for a website), but it isn’t enough if you want to make a professional library aimed at recording musicians.</p>
<p>So companies that produce piano sample libraries record <em>each key</em> of a piano at different loudness levels, and play the appropriate sample for each key based on the velocity input during performance. A single key of a piano can have up to 127 different samples for different loudness levels, but in practice the most you’ll see is 16, 32 or 64 in higher-end piano sample libraries.</p>
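<p>As a rough sketch of how a sampler might pick among those layers (a hypothetical helper, not taken from any particular library), you can scale the incoming MIDI velocity into a layer index:</p>

```javascript
// Hypothetical sketch: map a MIDI velocity (0-127) to one of N recorded
// velocity layers, assuming the layers are spread evenly across the range.
function pickVelocityLayer(velocity, layerCount) {
  var clamped = Math.max(0, Math.min(127, velocity));
  // scale 0..127 into the 0..layerCount-1 index range
  var layer = Math.floor(clamped / 128 * layerCount);
  return Math.min(layer, layerCount - 1);
}

// with 16 layers, a soft note and a loud note land on different samples:
pickVelocityLayer(20, 16);  // → 2
pickVelocityLayer(120, 16); // → 15
```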
<p>There are lots of other details too; some libraries record <em>multiple</em> samples for a single loudness level and alternate between them to provide a more natural sound (so, say, if you press the middle C twice one after another at a loudness level of 64, the sampler won’t play the same sample the second time it is pressed, this prevents the “machine gun” effect at fast passages). The professional libraries also provide separate samples for the mechanical noises of pianos to be mixed into or out of the final output sound as desired. Combine all of those and you have a few gigabytes worth of sound samples.</p>
<p>The Touch Pianist website is obviously not aimed at recording musicians, but I still wanted a somewhat convincing and, more importantly, <em>entertaining</em> piano sound so that the pieces performed with it would sound reasonably good. But I wanted one with a very small size, so the download would be fast and the bandwidth costs wouldn’t bankrupt me if it went viral (which it did).</p>
<p>So I did the math. If I limited myself to one 3-4 second sample per key (I decided against using pitch shifting to reuse single samples for multiple pitches) and used mp3 and/or ogg compression, I figured I could hit my 2-3 megabyte target. Using a single sample per key, however, meant that I wouldn’t be doing velocity layering. I <em>could</em> do simple velocity layering by using two samples per key (one loud and one soft, doubling the package size), but I wanted this thing to work on mobile, and WebAudio decompresses the files into memory as raw audio, so this wouldn’t work well in the memory-constrained environment of mobile devices (my iPad 2 couldn’t handle 88x2 three-second mono samples in memory, for instance; I tried and received stern memory warnings).</p>
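<p>The memory side of that math can be sketched like this (the decode rate and bit depth are assumptions for illustration):</p>

```javascript
// Back-of-the-envelope memory math for decoded audio. WebAudio decodes
// compressed files into raw 32-bit float PCM in memory, so the download
// size is not the constraint on devices; the decoded size is.
var keyCount = 88;
var secondsPerSample = 3;
var sampleRate = 44100;  // assumed decode rate
var bytesPerSample = 4;  // 32-bit float, mono

var oneLayerMB =
  keyCount * secondsPerSample * sampleRate * bytesPerSample / (1024 * 1024);
// one velocity layer ≈ 44 MB decoded; two layers roughly double that,
// which is the kind of footprint that overwhelms an iPad 2
```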
<p>I needed a way to do somewhat convincing velocity layering; a way to change the <em>timbre</em> of soft sounds compared to loud sounds without relying on changing the volume of samples alone. It needed to be on budget in terms of download size, and memory requirements when they’d be eventually uncompressed.</p>
<h2 id="hearing-is-believing">Hearing is Believing</h2>
<p>Here is an example from the three different ranges of a piano. The notes are C1, C3 and C5. For each note, first the key is pressed very forcefully, then very softly.</p>
<p><strong>WARNING:</strong> Can be loud, headphone users.</p>
<div class="audio-player"><audio src="http://www.earslap.com/assets/tpsoundengine/3cs.mp3" controls="controls"><p>Sorry, your browser probably doesn't recognise the audio tag somehow...</p></audio></div>
<p>If you pay attention, the difference is not <em>only</em> in loudness. The timbres of a pitch played softly and played loudly are also quite different, especially at the very beginning, the attack portion. For loud notes, the attack has a lot of high frequency content, whereas in softer notes those high frequency vibrations are damped. This qualitative difference between dynamic levels comes from the very physical construction of the piano itself. It is a huge part of what makes a piano, well, a piano.</p>
<p>If the difference is not that clear (soft sounds are hard to hear), here is the same example but this time the softer sounds are volume matched to the louder ones.</p>
<div class="audio-player"><audio src="http://www.earslap.com/assets/tpsoundengine/3cs_gainmatch.mp3" controls="controls"><p>Sorry, your browser probably doesn't recognise the audio tag somehow...</p></audio></div>
<p>The sounds with softer dynamics have an almost whisper-like quality. You can’t take a scream and pass it off as a whisper merely by decreasing its volume. So for a decent piano, you can’t simply get away with adjusting the loudness of the sound based on input velocity; you need to alter the frequency content too. But how do we do it in realtime? And in a web browser?</p>
<h2 id="cue-lowpass-filtering">Cue Lowpass Filtering</h2>
<p>The most obvious way to <em>kind-of</em> simulate the meat of what is happening above is to use a <a href="http://en.wikipedia.org/wiki/Low-pass_filter" target="_blank">lowpass filter</a> on the sound. A lowpass filter attenuates the frequencies in a signal above a chosen cutoff frequency. So if I lowpass a piece of sound at 500Hz, for instance, the filter will dampen the frequency content above 500Hz and allow the lower frequencies to pass.</p>
<p>With a lowpass filter, you can control the cutoff frequency, and this is what we will do: use a single loud piano sample and lowpass filter it during playback when a softer sound is requested. Removing frequencies is a lot easier than adding them; that’s why we start with a loud sample, so we can carve the already existing high frequency content out of it when desired.</p>
<p>The WebAudio implementation in all major, recent web browsers includes a built-in lowpass filter via the <a href="https://developer.mozilla.org/en-US/docs/Web/API/AudioContext/createBiquadFilter" target="_blank">AudioContext.createBiquadFilter()</a> interface (and also highpass, bandpass <a href="https://developer.mozilla.org/en-US/docs/Web/API/BiquadFilterNode">and many others</a>). Even if they didn’t, we could build one ourselves with a <a href="https://developer.mozilla.org/en-US/docs/Web/API/ScriptProcessorNode" target="_blank">ScriptProcessorNode</a> in JavaScript (a decent lowpass filter is a few lines of code once you have the filter algorithm), but it would be a lot less efficient because the native WebAudio nodes are implemented in native code inside the browser.</p>
<p>Here is some WebAudio code for implementing realtime lowpass filtering on audio nodes:</p>
<div class="highlight"><pre><code class="language-javascript" data-lang="javascript"><span class="c1">//grabbing the right AudioContext object for the browser</span>
<span class="kd">var</span> <span class="nx">AudioContext</span> <span class="o">=</span> <span class="nb">window</span><span class="p">.</span><span class="nx">AudioContext</span> <span class="o">||</span> <span class="nb">window</span><span class="p">.</span><span class="nx">webkitAudioContext</span><span class="p">;</span>
<span class="cm">/*</span>
<span class="cm">actually creating the audio context. you have a global limit (even across tabs) on the number of AudioContexts in WebAudio spec and implementation for now, so you need to be careful. create one only when absolutely necessary. Always share a global instance.</span>
<span class="cm">*/</span>
<span class="nx">ctx</span> <span class="o">=</span> <span class="nx">ctx</span> <span class="o">||</span> <span class="k">new</span> <span class="nx">AudioContext</span><span class="p">();</span>
<span class="c1">//WebAudio does not currently have a native white noise generator so let's create one ourselves.</span>
<span class="nx">noise</span> <span class="o">=</span> <span class="nx">ctx</span><span class="p">.</span><span class="nx">createScriptProcessor</span><span class="p">(</span><span class="mi">4096</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">);</span>
<span class="nx">noise</span><span class="p">.</span><span class="nx">onaudioprocess</span> <span class="o">=</span> <span class="kd">function</span><span class="p">(</span><span class="nx">e</span><span class="p">)</span> <span class="p">{</span>
<span class="kd">var</span> <span class="nx">output</span> <span class="o">=</span> <span class="nx">e</span><span class="p">.</span><span class="nx">outputBuffer</span><span class="p">.</span><span class="nx">getChannelData</span><span class="p">(</span><span class="mi">0</span><span class="p">);</span>
<span class="k">for</span> <span class="p">(</span><span class="kd">var</span> <span class="nx">i</span> <span class="o">=</span> <span class="mi">0</span><span class="p">;</span> <span class="nx">i</span> <span class="o"><</span> <span class="nx">output</span><span class="p">.</span><span class="nx">length</span><span class="p">;</span> <span class="nx">i</span><span class="o">++</span><span class="p">)</span> <span class="p">{</span>
<span class="c1">//output values should be between -1 and 1</span>
<span class="nx">output</span><span class="p">[</span><span class="nx">i</span><span class="p">]</span> <span class="o">=</span> <span class="p">(</span><span class="nb">Math</span><span class="p">.</span><span class="nx">random</span><span class="p">()</span> <span class="o">*</span> <span class="mi">2</span><span class="p">)</span> <span class="o">-</span> <span class="mi">1</span><span class="p">;</span>
<span class="p">}</span>
<span class="p">}</span>
<span class="c1">//creating the lowpass filter</span>
<span class="nx">cutoffFreq</span> <span class="o">=</span> <span class="mi">500</span><span class="p">;</span>
<span class="nx">lowpassNode</span> <span class="o">=</span> <span class="nx">ctx</span><span class="p">.</span><span class="nx">createBiquadFilter</span><span class="p">();</span>
<span class="nx">lowpassNode</span><span class="p">.</span><span class="nx">type</span> <span class="o">=</span> <span class="s2">"lowpass"</span><span class="p">;</span>
<span class="nx">lowpassNode</span><span class="p">.</span><span class="nx">frequency</span><span class="p">.</span><span class="nx">value</span> <span class="o">=</span> <span class="nx">cutoffFreq</span><span class="p">;</span>
<span class="c1">//connecting nodes together</span>
<span class="nx">noise</span><span class="p">.</span><span class="nx">connect</span><span class="p">(</span><span class="nx">lowpassNode</span><span class="p">);</span>
<span class="nx">lowpassNode</span><span class="p">.</span><span class="nx">connect</span><span class="p">(</span><span class="nx">ctx</span><span class="p">.</span><span class="nx">destination</span><span class="p">);</span> <span class="c1">//at this point, we have sound on speakers.</span>
<span class="c1">//..</span>
<span class="c1">//when you want to alter the cutoff frequency</span>
<span class="nx">newFreq</span> <span class="o">=</span> <span class="mi">1000</span><span class="p">;</span>
<span class="nx">lowpassNode</span><span class="p">.</span><span class="nx">frequency</span><span class="p">.</span><span class="nx">value</span> <span class="o">=</span> <span class="nx">newFreq</span><span class="p">;</span>
<span class="c1">//..</span>
<span class="c1">//when you want to end it all</span>
<span class="nx">lowpassNode</span><span class="p">.</span><span class="nx">disconnect</span><span class="p">();</span>
<span class="nx">noise</span> <span class="o">=</span> <span class="kc">null</span><span class="p">;</span>
<span class="nx">lowpassNode</span> <span class="o">=</span> <span class="kc">null</span><span class="p">;</span> <span class="c1">//GC will handle the rest</span></code></pre></div>
<p>In action: (<strong>WARNING:</strong> Might be loud.)</p>
<div id="lp-noise-example" style="background: rgb(74, 129, 153);padding: 5px;border-radius: 5px;">
<button id="lp-noise-playstop" style="font-family: Overlock, Arial; margin-right: 0.5em;">Play White Noise</button>
<span style="margin-right: 0.5em;color:white;">Cutoff:</span><input id="lp-noise-cutoff-slider" type="range" min="10" max="12000" step="1" value="500" style="margin-right: 0.5em;" /><span id="lp-noise-cutoff-label" style="color:white;">500Hz</span>
<select id="noise-filter-select">
<option value="lowpass">lowpass</option>
<option value="highpass">highpass</option>
<option value="bandpass">bandpass</option>
</select>
<script>
(function(r) {
r.noisePlaying = false;
r.playStop = document.getElementById('lp-noise-playstop');
r.cutoffSlider = document.getElementById('lp-noise-cutoff-slider');
r.cutoffLabel = document.getElementById('lp-noise-cutoff-label');
r.filterTypeSelect = document.getElementById('noise-filter-select');
r.lpNoiseCutoffFreq = 500;
playStop.onclick = function() {
if(!r.noisePlaying) {
var AudioContext = window.AudioContext || window.webkitAudioContext;
window.ctx = window.ctx || new AudioContext();
r.noise = window.ctx.createScriptProcessor(4096, 1, 1);
r.noise.onaudioprocess = function(e) {
var output = e.outputBuffer.getChannelData(0);
for (var i = 0; i < output.length; i++) {
output[i] = (Math.random() * 2) - 1;
}
}
r.noiseLowpassNode = r.ctx.createBiquadFilter();
r.noiseLowpassNode.type = r.filterTypeSelect.value;
r.noiseLowpassNode.frequency.value = parseInt(r.cutoffSlider.value);
r.noise.connect(r.noiseLowpassNode);
r.noiseLowpassNode.connect(ctx.destination);
playStop.innerHTML = "Stop the Noise";
noisePlaying = true;
} else
{
r.noiseLowpassNode.disconnect();
playStop.innerHTML = "Play White Noise";
noisePlaying = false;
}
}
r.cutoffSlider.oninput = function(e) {
lpNoiseCutoffFreq = cutoffSlider.value;
cutoffLabel.innerHTML = lpNoiseCutoffFreq + "Hz";
if(r.noiseLowpassNode) {
r.noiseLowpassNode.frequency.value = parseInt(cutoffSlider.value);
}
}
r.filterTypeSelect.onchange = function(e) {
if(r.noiseLowpassNode) {
r.noiseLowpassNode.type = r.filterTypeSelect.value;
}
}
})(window);
</script>
</div>
<p>In this example, we applied the lowpass filtering on a white noise signal to demonstrate the effect. Since WebAudio has no native white noise implementation right now, we created a script processor and implemented white noise ourselves.</p>
<p>I provided options for testing some of the different filter types. Highpass is the exact opposite of lowpass: frequencies above the cutoff frequency are passed to the output. A bandpass filter passes frequencies around the cutoff frequency and nothing else. The full list of available filter types is given in this <a href="https://developer.mozilla.org/en-US/docs/Web/API/BiquadFilterNode/type" target="_blank">MDN article</a>.</p>
<p>To apply realtime lowpass to audio samples, instead of the script processor above, you’ll have an AudioBufferSourceNode that reads its data from an AudioBuffer, like the following:</p>
<div class="highlight"><pre><code class="language-javascript" data-lang="javascript"><span class="c1">//soundFile holds your mp3 file</span>
<span class="nx">ctx</span><span class="p">.</span><span class="nx">decodeAudioData</span><span class="p">(</span><span class="nx">soundFile</span><span class="p">,</span> <span class="kd">function</span><span class="p">(</span><span class="nx">buffer</span><span class="p">)</span> <span class="p">{</span>
<span class="nx">decodedBuffer</span> <span class="o">=</span> <span class="nx">buffer</span><span class="p">;</span>
<span class="p">});</span>
<span class="c1">//when decodedBuffer is ready...</span>
<span class="nx">sourceNode</span> <span class="o">=</span> <span class="nx">ctx</span><span class="p">.</span><span class="nx">createBufferSource</span><span class="p">();</span>
<span class="nx">sourceNode</span><span class="p">.</span><span class="nx">buffer</span> <span class="o">=</span> <span class="nx">decodedBuffer</span><span class="p">;</span>
<span class="nx">lowpassNode</span> <span class="o">=</span> <span class="nx">ctx</span><span class="p">.</span><span class="nx">createBiquadFilter</span><span class="p">();</span>
<span class="nx">lowpassNode</span><span class="p">.</span><span class="nx">type</span> <span class="o">=</span> <span class="s2">"lowpass"</span><span class="p">;</span>
<span class="cm">/*</span>
<span class="cm">cutoff should be derived from the velocity / loudness we want for this particular note. Louder the note, higher the cutoff frequency since we want more high frequency content in louder notes, and we want softer notes kind of muffled which means lower cutoff frequencies.</span>
<span class="cm">*/</span>
<span class="nx">lowpassNode</span><span class="p">.</span><span class="nx">frequency</span><span class="p">.</span><span class="nx">value</span> <span class="o">=</span> <span class="nx">cutoffFreq</span><span class="p">;</span>
<span class="c1">//gain node for loudness adjustment</span>
<span class="nx">gainNode</span> <span class="o">=</span> <span class="nx">ctx</span><span class="p">.</span><span class="nx">createGain</span><span class="p">();</span> <span class="c1">//createGainNode() for older browsers</span>
<span class="cm">/*</span>
<span class="cm">gain should be directly proportional to how loud you want the sound to be.</span>
<span class="cm">lowpass filtering already decreases the gain of your sound source (since it's a subtractive process) so use this for additional tuning of loudness.</span>
<span class="cm">*/</span>
<span class="nx">gainNode</span><span class="p">.</span><span class="nx">gain</span><span class="p">.</span><span class="nx">value</span> <span class="o">=</span> <span class="nx">targetGain</span><span class="p">;</span>
<span class="c1">//lets make the connections. source sound -> lowpass filter -> gain node -> speakers</span>
<span class="nx">sourceNode</span><span class="p">.</span><span class="nx">connect</span><span class="p">(</span><span class="nx">lowpassNode</span><span class="p">);</span>
<span class="nx">lowpassNode</span><span class="p">.</span><span class="nx">connect</span><span class="p">(</span><span class="nx">gainNode</span><span class="p">);</span>
<span class="nx">gainNode</span><span class="p">.</span><span class="nx">connect</span><span class="p">(</span><span class="nx">ctx</span><span class="p">.</span><span class="nx">destination</span><span class="p">);</span>
<span class="nx">sourceNode</span><span class="p">.</span><span class="nx">start</span><span class="p">(</span><span class="mi">0</span><span class="p">);</span> <span class="c1">//noteOn(0) for older browsers.</span></code></pre></div>
<p>The cutoff frequency of the lowpass filter needs to be tuned by ear for each sample source. You want lower cutoff frequencies for quieter sounds, and higher cutoff frequencies for louder sounds. But the velocity and pitch to cutoff frequency mapping needs manual tuning for each case. You need to try a bunch of values and tweak until it sounds good for all pitches and velocity ranges.</p>
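<p>As one plausible starting point for that tuning (the endpoint frequencies are made-up values to tweak by ear, not the ones Touch Pianist uses), an exponential mapping keeps the sweep perceptually even, since we hear frequency roughly logarithmically:</p>

```javascript
// Illustrative sketch: map a normalized velocity (0..1) to a lowpass
// cutoff frequency. An exponential curve between the two endpoints keeps
// the brightness transition perceptually even. The 100Hz / 8000Hz
// endpoints are assumed starting values, meant to be tuned by ear.
function velocityToCutoff(velocity, minHz, maxHz) {
  var v = Math.max(0, Math.min(1, velocity));
  return minHz * Math.pow(maxHz / minHz, v);
}

velocityToCutoff(0, 100, 8000); // → 100 (very muffled)
velocityToCutoff(1, 100, 8000); // → 8000 (full brightness)
// then, per note: lowpassNode.frequency.value = velocityToCutoff(v, 100, 8000);
```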
<p>To hear how this works in my case, go to the <a href="http://touchpianist.com" target="_blank">Touch Pianist</a> site, choose a piece that doesn’t have a lot of variation in note velocity (I suggest <em>Prelude 1 in C Major</em>, found in <em>Bach Pack 1</em>) and, instead of using the keyboard, try the range of sounds you get when you click / tap at different heights on the screen. The lower you click, the quieter the note: both the cutoff frequency of the filter and the value of the gain node will be lower. If you click high, the cutoff frequency will be high and more high frequency content will pass on to the speakers.</p>
<h3 id="alternatives-to-sampling">Alternatives to Sampling</h3>
<p>There is an arguably better way of creating physical instrument sounds than recording and preserving each and every sound an instrument <em>might</em> make. The technique is called <a href="http://en.wikipedia.org/wiki/Physical_modelling_synthesis" target="_blank">physical modelling synthesis</a>. PMS (heh) is an entirely procedural approach to creating instrument sounds: you figure out the mathematical formulas that govern the sound-producing characteristics of a vibrating body, and run those instead. In a piano’s case, you feed the system your key press, and the system runs the formulas to create the vibrations that make up the sound.</p>
<p>Unfortunately for me, such an approach requires a Ph.D. in applied mathematics and years of research in the area. Also such a system requires quite a bit of processing power.</p>
<p>Still, if you are interested, I know at least one company that creates a PMS piano, and in my opinion it works extremely well. It is <a href="https://www.pianoteq.com" target="_blank">Pianoteq</a> by Modartt (I’m not affiliated with them, I just love their work). They have audio examples.</p>
<p>The PMS approach also allows you to tweak the physical properties of the instrument (from a single code base) and even lets you create instruments with plausible physics that can’t possibly be constructed in real life.</p>
<h2 id="practical-problems-with-relying-solely-on-webaudio">Practical Problems With Relying Solely on WebAudio</h2>
<p>The initial plan I had involved having multiple fallbacks in case the browser didn’t support WebAudio (e.g. Internet Explorer): first the good old &lt;audio&gt; tag, and if all else failed, Flash.</p>
<h3 id="webaudio-support">WebAudio Support</h3>
<p>After implementing the whole WebAudio version, I burned out and decided to ship Touch Pianist as fast as possible. In retrospect, I’m glad I did, because the vast majority of my visitors (of more than a million people in 3 weeks so far) had support in their browsers. I suppose the kinds of people that would be interested in this sort of thing, or that are at a computer located somewhere where looking at music-playing websites is appropriate, already use recent versions of Chrome, Firefox or Safari. Although caniuse.com says WebAudio should be available in some form in ~66% of the browsers out there, my analytics say that about 93% of my visitors had a browser that supported it. As it turns out, it would have been a waste of time to implement the inferior fallbacks (the &lt;audio&gt; tag wouldn’t have filtering, and Flash would have had higher latency and worse performance).</p>
<h3 id="sample-format">Sample Format</h3>
<p>Reading samples relies on the AudioContext.decodeAudioData() method of the WebAudio API. Almost all browsers can decode mp3 data this way. The only exception I found in my case was Firefox on Windows. Firefox on OS X had no problem decoding mp3 data, but I just couldn’t get Firefox on Windows to do it. So for that browser alone, I had to put separate ogg audio assets on the site, and those are handled by Windows Firefox without issues.</p>
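<p>One way to decide which asset set to download is to feature-detect up front with the audio element’s canPlayType() (a sketch with a hypothetical helper; note this probes element playback support, which usually, but not always, tracks decodeAudioData support):</p>

```javascript
// Hypothetical helper: choose an asset extension based on what the
// browser reports it can decode. canPlayType() returns "", "maybe"
// or "probably"; any non-empty string counts as supported.
function pickFormat(audioElement) {
  if (audioElement.canPlayType("audio/mpeg")) return "mp3";
  if (audioElement.canPlayType("audio/ogg")) return "ogg";
  return null; // no supported format; fall back or bail out
}

// usage in a browser:
// var ext = pickFormat(document.createElement("audio"));
// then request "samples/c4." + ext
```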
<h3 id="puzzling-garbage-collector-behaviour-of-os-x-firefox">Puzzling Garbage Collector Behaviour of OS X Firefox</h3>
<p>One of the more frustrating issues I encountered during development was the GC behavior of this one browser: Firefox on OS X.</p>
<p>The AudioBufferSourceNode interface in WebAudio is designed for fire-and-forget use. When you call .start() (or noteOn() in older WebAudio implementations; you have to support both), unless you want to do something else with the node (stop it prematurely, for instance) you can just forget about it (remove all references to it from your code), and the GC is supposed to release it from the audio graph before accumulated live nodes become an issue. You are deliberately forbidden from reusing these nodes (calling start() on one a second time, for instance), so you just leave each node alone after playing it and the GC handles it. When you want to play the same sample again, you create a new AudioBufferSourceNode (they are lightweight by design) and play this new instance instead of reusing the old one.</p>
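<p>The fire-and-forget pattern looks roughly like this (a sketch; the explicit disconnect in onended is a common nudge for the garbage collector, not a confirmed fix for the behaviour described below):</p>

```javascript
// Sketch of the fire-and-forget pattern: create a fresh source node per
// note, start it, and keep no reference to it. The onended handler
// explicitly disconnects the node from the graph once playback finishes.
function playSample(ctx, buffer, destination) {
  var source = ctx.createBufferSource();
  source.buffer = buffer;
  source.connect(destination);
  source.onended = function() {
    source.disconnect(); // detach from the audio graph when done
  };
  source.start(0);
  // deliberately no reference kept; a new node is made for the next note
}
```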
<p>This works swimmingly in every browser out there except one: Firefox on OS X. There, the GC simply does not kick in until thousands of nodes accumulate during playback of a piece, and only when your computer has been brought to its knees does the browser release hundreds, sometimes thousands, of nodes at once.</p>
<p>Playing a piece starts just fine, but these nodes (one per note, with their connected gain and filter nodes) pile up, CPU usage monotonically increases, the framerate starts to drop, sounds start to glitch, and the computer fans start whirring at max rpm. Then you wait (anywhere from 10 to 60 seconds), and only then does OS X Firefox decide enough is enough and release everything. After that, life is beautiful again, but your Beethoven performance was ruined.</p>
<p>Firefox on Windows, on the other hand, works beautifully. It does what you expect: it creates the nodes when you ask, and releases them before inactive nodes become a burden. I have no idea why the two platforms behave so differently.</p>
<p>I just couldn’t find a workaround for this, so my apologies to Firefox users using a Mac.</p>
<h3 id="ios-issues">iOS Issues</h3>
<p>I also made iOS and then Android versions of Touch Pianist.</p>
<p>With iOS, WebAudio support and efficiency is <strong>amazing</strong>. Really. There is almost no latency, support is great, there are just no issues with it. I use WebAudio inside a WebView and the embedded webkit engine handles it amazingly.</p>
<p>…except for one thing I haven’t been able to figure out: a very small percentage of my users report that they get <em>no sound at all</em>. From the small sample I have, this mostly happens on <em>some</em> iPads and, more rarely, on iPhone 6 devices running the latest version of iOS. What is more confusing is that the issue is resolved by a complete OS <strong>reboot</strong> on some, but not all, devices.</p>
<p>I haven’t been able to reproduce the issue on my own devices yet and still have no solution. I found a part of the Apple documentation saying that WebAudio sound must first be initiated by a user action (making the first sound in response to a user tap), but I don’t know if this holds true for embedded webviews. I’ll include such an action in the next version to see whether it helps. In any case, this doesn’t explain the fact that on some devices the problem is fixed by an OS reboot.</p>
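<p>For reference, the usual shape of that unlock trick is to play a short silent buffer from inside a touch handler. This is the widely used pattern, not something I can confirm fixes the issue here:</p>

```javascript
// Common "unlock" pattern for iOS: inside a user gesture handler, play
// a tiny silent buffer once so the context is allowed to produce sound
// afterwards.
function unlockAudio(ctx) {
  var buffer = ctx.createBuffer(1, 1, 22050); // 1 sample of silence
  var source = ctx.createBufferSource();
  source.buffer = buffer;
  source.connect(ctx.destination);
  source.start(0);
}

// usage in a browser (run once, from a real tap):
// document.addEventListener("touchend", function handler() {
//   unlockAudio(ctx);
//   document.removeEventListener("touchend", handler);
// });
```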
<p>Still, this affects, from what I can tell, a small percentage of users (I might be wrong though), and I’m sure the reason will reveal itself soon. If you have any ideas, please let me know.</p>
<h3 id="android-issues">Android Issues</h3>
<p>Audio in the fragmented Android world continues to be a <em>bag of hurt</em>. The audio latency in embedded Chromium is not acceptable and renders the instrument unplayable: there is a significant delay between the moment you tap the screen and the moment sound comes out of the speakers, which makes keeping correct rhythm practically impossible.</p>
<p>To work around that, I had to disable WebAudio inside the WebView completely and implement the sampler with SoundPool, in Java code running in a separate thread. The implementation of SoundPool has different bugs and limitations on different devices: on some it works extremely well, on others it just crashes and burns, and each has different polyphony limits. And there is no efficient way to apply a fade-out to running sounds without cutting them off abruptly, other than running a Timer thread that sets the volume of the individual sound streams a few times per second, which is very inefficient.</p>
<p>At least, on the devices that support it properly, the latency is adequate.</p>
<p>Surprisingly, the Firefox (Fennec) engine on Android supports WebAudio far better than Chrome in terms of latency; it is almost on par with WebKit on iOS. I don’t know how they managed that. Unfortunately, its general graphics / JavaScript performance is a lot worse, so the engine is unusable for my purposes.</p>
<h2 id="closing-words">Closing Words</h2>
<p>Over the last few months, I got the opportunity to get quite intimate with the WebAudio API. Flexible realtime audio inside web browsers had been my dream for a long time before WebAudio was commonplace. Two of my earlier and popular projects, <a href="http://www.earslap.com/page/otomata" target="_blank">Otomata</a> and <a href="http://www.earslap.com/page/circuli" target="_blank">Circuli</a>, used Flash after Adobe brought realtime audio processing into web browsers starting with Flash Player v10. To port them to iOS, though, I had to rewrite their sound engines from scratch in native code. This is the first time I’m sharing realtime sound processing code between the web browser and mobile versions of an app, and my feeling is that WebAudio is <em>almost</em> ready to be used in this fashion.</p>
<p>The support is there; if you are planning to do audio experiments in web browsers, don’t be afraid to use WebAudio alone, as most (more than 90%) of your visitors will handle it just fine. Porting to mobile devices still presents some challenges, but they aren’t the type of problems that are here to stay. On iOS, I had just one glitch (but a damning one: no sound on <em>some</em> devices), and Android has been pushing towards low latency audio for some time already, so I’m confident WebAudio will be a workable solution for all the mobile devices you care about in the near future.</p>
Touch Pianist - Magical Piano In Your Web Browser
2015-05-24T06:14:00+03:00
http://www.earslap.com//page/touch-pianist
<p>This is a work of mine that I’ve been working on for the past few months. Released in early May 2015, it became highly viral; the pieces presented inside were played more than a million times in a mere 2 weeks.</p>
<p>It is an HTML5 web browser experiment using Canvas (optionally WebGL) and WebAudio which provides a visualization of all-time popular classical piano pieces, and gives you a very addictive way of performing those pieces using your computer keyboard or a touch screen. It also has iOS and Android versions at the moment, with a bigger and growing music library.</p>
<p><a href="http://touchpianist.com" target="_blank">Touch Pianist site resides here.</a> (Opens in a new tab)</p>
<p>Go ahead and try it, and tell me what you think. I’ll update this page to provide links when I get around to writing articles about the tech and tools behind it.</p>
<p>Oh and here is an action video (performed on an iPad):</p>
<iframe class="youtube-vid" width="420" height="315" src="//www.youtube.com/embed/_GiMFBsAbtk" frameborder="0" allowfullscreen=""></iframe>
<p>Enjoy!</p>
Circuli - Generative Ambient Sound Instrument
2012-02-25T03:01:01+02:00
http://www.earslap.com//page/circuli
<div id="circuliswf-container">
<object classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" width="720" height="480">
<param name="movie" value="http://www.earslap.com/assets/circuli/circuli.swf" />
<!--[if !IE]>-->
<object type="application/x-shockwave-flash" data="http://www.earslap.com/assets/circuli/circuli.swf" width="720" height="480">
<!--<![endif]-->
<p>Circuli - Generative Ambient Sound Instrument.</p>
<p>You need Flash plugin to experience this content.</p>
<!--[if !IE]>-->
</object>
<!--<![endif]-->
</object>
</div>
<ul>
<li>
<p>Circles grow at a constant rate.</p>
</li>
<li>
<p>No two circles can overlap.</p>
</li>
<li>
<p>Bigger circle pushes and shrinks smaller circle when in contact.</p>
</li>
<li>
<p>A circle “pops” and makes a sound when its boundary intersects with the center of another circle.</p>
</li>
<li>
<p>The pitch of the sound is determined by the position of the circle on the background: the bigger the hole, the higher the pitch.</p>
</li>
<li>
<p>The envelope of the produced sound is determined by a number of parameters including final circle size and time of interaction between two involved circles.</p>
</li>
</ul>
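The rules above can be sketched in a few lines of code. This is my own illustrative naming (each circle as an `{x, y, r}` object), not the original Flash source:

```javascript
// Per tick: every circle grows, a bigger circle shrinks a smaller one it
// touches (so they never overlap), and a circle "pops" once its boundary
// reaches another circle's center. Popped circles would trigger notes.
function tick(circles, growth = 1) {
  for (const c of circles) c.r += growth;
  const popped = [];
  for (const a of circles) {
    for (const b of circles) {
      if (a === b) continue;
      const d = Math.hypot(a.x - b.x, a.y - b.y);
      // Overlap resolution: the bigger circle pushes into the smaller.
      if (d < a.r + b.r && a.r > b.r) b.r = Math.max(0, d - a.r);
      // Pop: a's boundary intersects b's center.
      if (a.r >= d && !popped.includes(a)) popped.push(a);
    }
  }
  return popped;
}
```

The envelope parameters mentioned in the last rule (final size, interaction time) would be derived from the popped circle's state at pop time.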
<p>An action video:</p>
<iframe class="youtube-vid" width="420" height="315" src="//www.youtube.com/embed/J2d23yvDPmM" frameborder="0" allowfullscreen=""></iframe>
Otomata - Generative Musical Sequencer
2011-07-16T04:01:01+03:00
http://www.earslap.com//page/otomata
<p>Click on the grid below to add cells, click on cells to change their direction, and press play to listen to your music.</p>
<div id="otoswf-container">
<object classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" width="480" height="320">
<param name="movie" value="http://www.earslap.com/assets/otomata/iotomata.swf" />
<!--[if !IE]>-->
<object type="application/x-shockwave-flash" data="http://www.earslap.com/assets/otomata/iotomata.swf" width="480" height="320">
<!--<![endif]-->
<p>Otomata - Generative music sequencer.</p>
<p>You need Flash plugin to experience this content.</p>
<!--[if !IE]>-->
</object>
<!--<![endif]-->
</object>
</div>
<p>Update: <a href="http://batu.in/iotomata">Click here</a> to get Otomata for your iPhone / iPod / iPad!</p>
<p>Official facebook page: <a href="http://www.facebook.com/pages/Otomata/218837764796473">http://www.facebook.com/pages/Otomata/218837764796473</a></p>
<p>Also this reddit page has many examples:
<a href="http://batu.in/otoreddit">http://batu.in/otoreddit</a></p>
<p>And there is a subreddit for Otomata:
<a href="http://www.reddit.com/r/otomata/">http://www.reddit.com/r/otomata/</a></p>
<h2 id="what-is-this-anyway">What is this anyway?</h2>
<p>Otomata is a generative sequencer. It employs a cellular automaton type logic I’ve devised to produce sound events.</p>
<p>Each alive cell has 4 states: up, right, down, left. At each cycle, the cells move themselves in the direction of their internal states. If a cell encounters a wall, it triggers a pitched sound whose frequency is determined by the x/y position of the collision, and the cell reverses its direction. If a cell encounters another cell on its way, it turns itself clockwise.</p>
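Those rules boil down to a very small step function. This is a minimal sketch of my own (not the original HaXe source), with each cell as `{x, y, dir}` on a square grid:

```javascript
// Directions: 0=up, 1=right, 2=down, 3=left.
const DX = [0, 1, 0, -1];
const DY = [-1, 0, 1, 0];

function step(cells, size) {
  const notes = [];
  for (const c of cells) {
    const nx = c.x + DX[c.dir];
    const ny = c.y + DY[c.dir];
    if (nx < 0 || nx >= size || ny < 0 || ny >= size) {
      // Wall hit: trigger a pitched sound keyed to the collision
      // position, and reverse direction.
      notes.push({ x: c.x, y: c.y });
      c.dir = (c.dir + 2) % 4;
    } else {
      c.x = nx;
      c.y = ny;
    }
  }
  // Cells that end up on the same square turn clockwise.
  const counts = new Map();
  for (const c of cells) {
    const key = c.x + "," + c.y;
    counts.set(key, (counts.get(key) || 0) + 1);
  }
  for (const c of cells) {
    if (counts.get(c.x + "," + c.y) > 1) c.dir = (c.dir + 1) % 4;
  }
  return notes; // each entry maps to a pitch via its x/y position
}
```

Exact details (e.g. whether collisions are resolved before or after movement) may differ from the real Otomata, but the flavor of the automaton is the same.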
<p>This set of rules produces chaotic results in some configurations, so you can end up with never-repeating, gradually evolving sequences. Go add some cells, change their orientation by clicking on them, press play, experiment, have fun.</p>
<p>If you encounter something you like, just press “Copy Piece Link” and save it somewhere, or better, share it!</p>
<p>Here is something from me to start with:</p>
<p><a href="http://www.earslap.com/projectslab/otomata?q=0d6p224s4v508n7n6012">http://www.earslap.com/projectslab/otomata?q=0d6p224s4v508n7n6012</a></p>
<p>And here is an action video:</p>
<iframe class="youtube-vid" width="420" height="315" src="//www.youtube.com/embed/k8EfRXihiWg" frameborder="0" allowfullscreen=""></iframe>
<p>Here are replies to some common questions:</p>
<p><strong>Q:</strong> Will you add feature X?</p>
<p><strong>A:</strong> I really strove for simplicity with this instrument. There are a lot of things that could be added, but I don’t want to clutter things. The ability to change the scale that is used and to change the overall tempo was already added in the second release. I’m always open to suggestions.</p>
<p><strong>Q:</strong> MIDI Output? OSC output?</p>
<p><strong>A:</strong> I will look into the possibilities for doing this. I will make a standalone version of this at some point which will emit OSC and MIDI. VST and AU versions might follow. These will take time. Sorry.</p>
<p><strong>Q:</strong> What did you use to program it?</p>
<p><strong>A:</strong> I programmed this with the HaXe programming language (it is awesome, look it up). I actually wrote a DSP library in HaXe and built this to try it out. So far it works nicely!</p>
<p><strong>Q:</strong> Why can’t I load pieces from other peoples’ links?</p>
<p><strong>A:</strong> You most probably have JavaScript disabled.</p>
<p><strong>Q:</strong> Will you open source it?</p>
<p><strong>A:</strong> I will open source the HaXe DSP library I used to program this. I might open source the whole thing while I’m at it. The code needs a bit of clean-up to be presentable, though.</p>
<p><strong>C:</strong> I found this bug where the cells move in circles. I broke it lol!</p>
<p><strong>A:</strong> Nope, it is not a bug. They are called oscillators. Use them to your benefit! Try this.</p>
<p><strong>Q:</strong> Can I use the output in my own piece, am I allowed to do that?</p>
<p><strong>A:</strong> By all means, do so! Hell, do it even if I said no. I’d love it if you gave some sort of attribution, but it’s still cool if you don’t.</p>
<p><strong>C:</strong> You are a filthy liar! You can’t create “never repeating” patterns with a system whose state space is finite!</p>
<p><strong>A:</strong> You are right. I should have said “(practically) never repeating” above. But please do the math, it is possible (but not proven) that there might exist some configurations where the exact repetition would take (billions of billions of times) longer than the known age of our universe. I am not a mathematician by any means, so that is as far as infinity goes for me (I also believe that light travels in infinite speed in a vacuum, come at me bro! I am a digital being, speed of light is my universe’s sampling rate). That said, I can see how my exact wording would make you cringe, sorry about that!</p>
Disquiet Interview
2011-07-16T04:01:01+03:00
http://www.earslap.com//weblog/disquiet-interview
<p>Marc Weidenbaum of Disquiet Interviewed Me About Otomata and Generative Art.</p>
<p><a href="http://disquiet.com/2011/05/17/batuhan-bozkurt-otomata-earlsap/">Read from here.</a></p>
FingerNeedle Released
2010-03-25T03:01:01+02:00
http://www.earslap.com//weblog/fingerneedle-released
<p>I’ve finished the code cleanup and documentation for an instrument which I’ve announced before on the sc-users mailing list. It was named TouchNet, but I’ve decided to separate the “Touch sampler” and “Net” parts, so now it’s called “FingerNeedle”. The download link is at the bottom of this post.</p>
<p>A video performance from the TouchNet days (with my FreeSound Quark), watch it in fullscreen please:</p>
<iframe class="vimeo-vid" src="//player.vimeo.com/video/8344674" width="500" height="374" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe>
<p>FingerNeedle is a gesture based instrument that is operated from a multi-touch enabled surface. In essence, it is a sampler instrument which converts sounds into square images and the performer plays rectangular portions from these images with touch. The images derived from the sounds give visual feedback to the performer about where to touch on the surface and what portion of the sound to use as a source. Unlike the standard waveform visualization, the performer is able to visually estimate the spectral characteristics of a sound and its change over time from the created image. FingerNeedle allows usage of several layers of sounds to be controlled and played back simultaneously and borrows the idea of “gesture recording” that is also present in my <a href="/weblog/dequencher-a-live-sequencer.html">deQuencher</a> instrument. The recorded gestures can be post processed to be slowed down and sped up dynamically.</p>
<p>The system is able to load uncompressed 16 bit mono sounds at any sample rate. A single sample in a 16 bit sound file can have 2^16 distinct values. The conversion system maps this range to shades of gray, where the lowest possible value is black and the highest possible value is white (loosely analogous to the grooves on a gramophone record): silence is pure gray, and a full amplitude sine wave is a continuous gradient between the shades of gray. Therefore, every pixel represents a single sample of a sound file. The samples are arranged inside the image in left to right, top to bottom order, sequentially. This means that an image window with 800×800 resolution can contain 640000 samples (and pixels), which equals approximately 14.5 seconds of mono audio at a 44.1kHz sampling rate.</p>
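The mapping just described can be sketched like so (my own code, assuming the straightforward linear mapping the paragraph implies):

```javascript
// Map a signed 16-bit sample in [-32768, 32767] to a grayscale value in
// [0, 255]: full negative = black, silence = mid gray, full positive = white.
function sampleToGray(sample) {
  return Math.round(((sample + 32768) / 65535) * 255);
}

// One pixel per sample, scanned left-to-right, top-to-bottom.
function imageDurationSeconds(width, height, sampleRate) {
  return (width * height) / sampleRate;
}

// imageDurationSeconds(800, 800, 44100) is roughly 14.5 seconds,
// matching the figure quoted above.
```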
<p>This visualization scheme allows one to predict the spectra as well as the dynamic range from the image before hearing it the first time. This visualization scheme aids the performance and also defines the interaction method with the instrument.</p>
<p>When a touch event is sensed, the instrument gets the blob size and multiplies its width and height by dynamically changeable multipliers. After that, a transparent rectangle becomes visible on the sound-image, and the instrument loops / plays the highlighted portion of the sound from the preloaded buffer.</p>
<p>To play a rectangular region from a buffer, I’ve developed a unit generator called “NeedleRect” that takes x / y, width and height inputs; its output is used as an index for another unit reading from a preloaded buffer.</p>
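The index mapping such a unit performs can be illustrated as follows. This is my own sketch, not the actual NeedleRect UGen (which produces indices continuously at audio rate rather than building an array):

```javascript
// Map a rectangle on a sound-image of width `imageWidth` pixels to the
// sample indices it covers, scanning the selected region row by row,
// matching the left-to-right, top-to-bottom pixel layout.
function rectToSampleIndices(x, y, w, h, imageWidth) {
  const indices = [];
  for (let row = 0; row < h; row++) {
    for (let col = 0; col < w; col++) {
      indices.push((y + row) * imageWidth + (x + col));
    }
  }
  return indices;
}
```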
<p>FingerNeedle currently requires:</p>
<ul>
<li>A MacBook / MacBook Pro with a multitouch trackpad.</li>
<li>A recent copy of BatUGens from sc-plugins project which includes the NeedleRect UGen. (The current binaries listed on the sc-plugins site do not include this UGen. You’ll need to compile it from source.)</li>
<li>BatLib Quark</li>
<li>MultiTouchPad Quark</li>
<li>And recommended for fun: FreeSound Quark.</li>
</ul>
<p>Download FingerNeedle from my github SCThings page:</p>
<p><a href="http://github.com/earslap/SCThings/downloads">http://github.com/earslap/SCThings/downloads</a></p>
<p>Please let me know if you try it and encounter any problems.</p>
NatureToolkit Update
2009-12-01T03:01:01+02:00
http://www.earslap.com//weblog/naturetoolkit-update
<p>I’ve updated my <a href="/weblog/new-quark-naturetoolkit.html">NatureToolkit</a> Quark with two new classes. GAWorkbench and GAPmatch. As usual, full documentation is provided with examples so be sure to check them out.</p>
<p><strong>GAWorkbench</strong><br />
This is a general purpose and modular <a href="http://en.wikipedia.org/wiki/Genetic_algorithm">genetic algorithm</a> class for SuperCollider. A genetic algorithm is a search technique used for finding approximate solutions to search and optimization problems, using techniques derived from evolutionary biology like inheritance, mutation, (artificial) selection and crossover.</p>
<p>GAWorkbench is a general purpose class that provides an easy interface for plugging your own data into a GA. It can be used for solving general (most probably music related) computational search problems, and also for exploring sound spaces, finding suitable parameters for synthesis networks, etc.</p>
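To make the idea concrete, here is a toy genetic algorithm in a few lines (illustrative only; GAWorkbench itself is far more general and modular, and is SuperCollider code, not JavaScript). It evolves a population of numbers to maximize a fitness function using selection, crossover and mutation:

```javascript
// Minimal GA sketch: keep the fitter half each generation (the survivors
// double as elites), breed children by averaging pairs of parents, and
// perturb each child with a small random mutation.
function evolve(fitness, popSize, generations, rand = Math.random) {
  let pop = Array.from({ length: popSize }, () => rand() * 10);
  for (let g = 0; g < generations; g++) {
    pop.sort((a, b) => fitness(b) - fitness(a));
    const parents = pop.slice(0, Math.ceil(popSize / 2));
    const children = parents.map((p, i) => {
      const mate = parents[(i + 1) % parents.length];
      return (p + mate) / 2 + (rand() - 0.5) * 0.5; // crossover + mutation
    });
    pop = parents.concat(children);
  }
  pop.sort((a, b) => fitness(b) - fitness(a));
  return pop[0];
}
```

In GAWorkbench terms, the fitness function is the pluggable part; for sound work it would score how close a candidate's synthesized output is to a target.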
<p><strong>GAPmatch</strong><br />
GAPmatch is a fast parameter estimation system that intends to work at near-interactive speeds to aid sound design and maybe live performances. To achieve this, it uses a fast search technique called a “genetic algorithm”, which mimics the evolutionary continuum that we observe in nature. It wraps a GAWorkbench instance (described above) for this specific purpose.</p>
<p>In terms of its functionality in the SC environment, parameter estimation means this: you have a SynthDef that is known to be capable of synthesizing a particular type of sound, but finding the right parameters is a time consuming and, in some situations, tedious task. Or you have built a raw SynthDef for a synthesis algorithm which exposes a few dozen parameters, and it’s impossible to conduct them in that state. Or you are trying to tune a physical modelling instrument… Situations can vary. This parameter estimation system basically takes an input sound, an input SynthDef and its parameter ranges, and tries to find the parameters for that particular synthesizer so that the output of the synth is faithful to the attributes of your input sound. From that stage on, you can alter the parameters yourself to change the synthesized sound in various ways.</p>
<p>Execution of genetic algorithms demands heavy computing resources, and this class can parallelize the task (the fitness calculation stage) across multiple scservers on your local machine (to utilize multiple CPU cores) and across networked machines.</p>
<p>An article I’ve submitted for peer review that describes the inner workings of the system in depth will be provided when I have the permission to do so. Until then, the documentation and sources should provide necessary info.</p>
<p>(You need to have BatUGens and BatPVUgens installed from sc3-plugins for this to work)</p>
<p>I’ve also gone ahead and recorded a little screencast to introduce GAWorkbench and show GAPmatch in action:</p>
<iframe class="vimeo-vid" src="//player.vimeo.com/video/7908757" width="500" height="374" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe>
<p>Let me know what you think of it.</p>
Music Release: Laconicism
2009-11-29T03:01:01+02:00
http://www.earslap.com//weblog/music-release-laconicism
<p><em>Laconicism</em> is a collection of procedural and interactive sound compositions.</p>
<p>The pieces are not finalized onto a static medium.</p>
<p>Instead, the collection is distributed as a computer software so that the works can be experienced in their intended multidimensional forms.</p>
<h2 id="downloads">Downloads</h2>
<p><strong>Mac OS X:</strong> <a href="http://www.earslap.com/assets/laconicism/Laconicism.dmg">Download from here.</a> Tested on Leopard and Snow Leopard. Should also work on Tiger (and on older PPC machines, though not tested).</p>
<p><strong>Other platforms:</strong> Download SuperCollider source files <a href="http://www.earslap.com/assets/laconicism/Laconicism_src_v1.zip">from here</a>. Unfortunately, Laconicism isn’t available as a standalone application for other platforms. See the ReadMe file inside the source archive for instructions on how to install and run. Some of the pieces do not work as intended on Windows yet, hopefully these will be fixed as SuperCollider matures further on the Windows platform.</p>
<p>It looks and feels like:</p>
<iframe class="vimeo-vid" src="//player.vimeo.com/video/7875283" width="500" height="374" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe>
<blockquote>
<p>This collection of sound entities is presented to you with a simple idea in mind: “Organized Sound”, once realized by its creator for distribution, does not necessarily have to be locked to definite micro or macro event sequences in the time domain. This apparent rigidity of distributed sound is, in fact, a “transmission loss” between the composer and listener, and is primarily caused by the limitations of our soon-to-be-obsolete old and static sound distribution mediums.</p>
<p>The works presented in this software are compositions and/or designs of “sound processes”, which provide a recipe for computers to generate sounds utilizing various sound synthesis techniques on the fly. These are not designs of “exactness and perfection”; instead, I merely define limits for a sound-event space.</p>
<p>The listener is not only free to experience the process compositions presented by the composer, but can also participate. Each piece has a different number of controls (embedded into the synthesis graphs in a “circuit-bending” fashion), whose functions are not really obvious until you start to play with them. The listener is free to observe other dimensions of the event space by altering the values of these controls, relying on listening and intuition (a feedback loop). The aforementioned transmission loss disappears, and another form of communication emerges between the composer, the listener and the piece.</p>
<p>© Batuhan Bozkurt – 2009
This work (music) is distributed under the CC BY-NC-SA 3.0 license:</p>
<p><a href="http://creativecommons.org/licenses/by-nc-sa/3.0/">http://creativecommons.org/licenses/by-nc-sa/3.0/</a></p>
</blockquote>
New Quark: NatureToolkit
2009-11-23T03:01:01+02:00
http://www.earslap.com//weblog/new-quark-naturetoolkit
<p>NatureToolkit is a Quark that will include classes and frameworks that will hopefully make life easier for those whose artistic media work tends to draw inspiration from natural processes.</p>
<p>The only class included right now is LSys (but there is more to come! See the bottom of this post…), which is a complete Lindenmayer Systems implementation for SuperCollider. There are various other string rewriting systems available for SC right now, but real L-Systems are more intricate in their details. For example, the rewriting system needs to be aware of branching points when context sensitivity comes into play, and this class supports that. In short, this class does parallel rewriting and supports context free, context sensitive, stochastic and parametric rulesets. When working with context sensitive rules in a bracketed L-system, it takes axial node points and segment neighborhood into account (which the sequential string representation does not capture naturally). Full documentation is included, and there is an accompanying LSPlant class that interprets the produced strings using the standard Turtle Graphics method, helping you visualize the produced data.</p>
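The core of any L-system is parallel rewriting, which fits in a few lines. Here is a minimal context-free sketch (illustrative only; LSys itself also handles the context-sensitive, stochastic, parametric and bracketed cases described above):

```javascript
// Parallel rewriting: in each iteration, every symbol of the string is
// replaced simultaneously according to the rules; symbols without a rule
// are copied unchanged.
function rewrite(axiom, rules, iterations) {
  let s = axiom;
  for (let i = 0; i < iterations; i++) {
    s = [...s].map(ch => (ch in rules ? rules[ch] : ch)).join("");
  }
  return s;
}

// Lindenmayer's original algae system:
// A -> AB, B -> A, starting from "A":
// A, AB, ABA, ABAAB, ABAABABA, ...
```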
<p>I’m more interested in the musical applications of L-Systems. The self-similarity in various domains of artistic works is often overlooked, but it can be observed at many time scales with detailed inspection (or sometimes it is very obvious and taken for granted; the fugue form / technique is a good starting point).</p>
<p>For an intro on L-Systems, consult here first, then I suggest you grab a copy of Algorithmic Beauty of Plants (Przemyslaw Prusinkiewicz – Aristid Lindenmayer) from here.</p>
<p>For musical applications of L-systems, I suggest you read the master’s thesis of Stelios Manousakis, Musical L-Systems, which discusses applications of L-Systems from the sample scale up to macro scales. There is also a body of past research in this area; you may want to search some databases.</p>
<p>Here are some not-so-interesting visualizations (the code for these is also included in the documentation) that I created while I was developing the class, to see if things were really working as they should.</p>
<h2 id="context-free-examples">Context free examples:</h2>
<p>Islands and Lakes from Algorithmic Beauty of Plants Fig 1.8:</p>
<p><a href="http://www.earslap.com/assets/ntoolkit/0.png"><img src="http://www.earslap.com/assets/ntoolkit/0.png" alt="" /></a></p>
<p>Quadratic Snowflake from A.B.O.P. Fig 1.7b:</p>
<p><a href="http://www.earslap.com/assets/ntoolkit/1.png"><img src="http://www.earslap.com/assets/ntoolkit/1.png" alt="" /></a></p>
<h2 id="branching-examples">Branching Examples:</h2>
<p>Tree from A.B.O.P. Fig 1.24c:</p>
<p><a href="http://www.earslap.com/assets/ntoolkit/2.png"><img src="http://www.earslap.com/assets/ntoolkit/2.png" alt="" /></a></p>
<p>Tree from A.B.O.P. Fig 1.24f:</p>
<p><a href="http://www.earslap.com/assets/ntoolkit/3.png"><img src="http://www.earslap.com/assets/ntoolkit/3.png" alt="" /></a></p>
<h2 id="stochastic-branches">Stochastic Branches:</h2>
<p>Stochastic rules let you define a single axiom string and a set of stochastic rules which create different yet coherent products each time. The following 3 trees were all generated from the same axiom and rules (A.B.O.P. Fig. 1.27):</p>
<p><a href="http://www.earslap.com/assets/ntoolkit/4.png"><img src="http://www.earslap.com/assets/ntoolkit/4.png" alt="" /></a></p>
<p><a href="http://www.earslap.com/assets/ntoolkit/5.png"><img src="http://www.earslap.com/assets/ntoolkit/5.png" alt="" /></a></p>
<p><a href="http://www.earslap.com/assets/ntoolkit/6.png"><img src="http://www.earslap.com/assets/ntoolkit/6.png" alt="" /></a></p>
<h2 id="parametric-systems">Parametric Systems:</h2>
<p>Parametric L-Systems allow you to define axioms with parametric arguments, and the parallel rewriting system makes them interact. Really complex yet self-similar forms can be crafted with them! The following 5 images are taken from 1 to 5 iterations of the classic “Row of Trees” example. A simple axiom can yield complex self-similar structures:</p>
<p><a href="http://www.earslap.com/assets/ntoolkit/7.png"><img src="http://www.earslap.com/assets/ntoolkit/7.png" alt="" /></a></p>
<p><a href="http://www.earslap.com/assets/ntoolkit/8.png"><img src="http://www.earslap.com/assets/ntoolkit/8.png" alt="" /></a></p>
<p><a href="http://www.earslap.com/assets/ntoolkit/9.png"><img src="http://www.earslap.com/assets/ntoolkit/9.png" alt="" /></a></p>
<p><a href="http://www.earslap.com/assets/ntoolkit/10.png"><img src="http://www.earslap.com/assets/ntoolkit/10.png" alt="" /></a></p>
<p><a href="http://www.earslap.com/assets/ntoolkit/11.png"><img src="http://www.earslap.com/assets/ntoolkit/11.png" alt="" /></a></p>
<h2 id="context-sensitive-l-systems-with-brackets-branches">Context Sensitive L-Systems with Brackets (Branches):</h2>
<p>This is where the string rewriting mechanism should take axial nodes and branch neighborhood into account to work properly. Really sophisticated structures are possible by using signal propagation mechanisms of L-Systems.</p>
<p>Examples are Fig 1.31 (a, b, c, d) from A.B.O.P:</p>
<p><a href="http://www.earslap.com/assets/ntoolkit/12.png"><img src="http://www.earslap.com/assets/ntoolkit/12.png" alt="" /></a></p>
<p><a href="http://www.earslap.com/assets/ntoolkit/13.png"><img src="http://www.earslap.com/assets/ntoolkit/13.png" alt="" /></a></p>
<p><a href="http://www.earslap.com/assets/ntoolkit/14.png"><img src="http://www.earslap.com/assets/ntoolkit/14.png" alt="" /></a></p>
<p><a href="http://www.earslap.com/assets/ntoolkit/15.png"><img src="http://www.earslap.com/assets/ntoolkit/15.png" alt="" /></a></p>
<p><a href="http://www.earslap.com/assets/ntoolkit/16.png"><img src="http://www.earslap.com/assets/ntoolkit/16.png" alt="" /></a></p>
<h2 id="other-notes">Other notes:</h2>
<p>As mentioned earlier, the examples here are Turtle Graphics interpretations of the generated strings, as L-Systems were first developed for algorithmically generating (and observing the growth of) plant structures. This is the direct visual representation of the generated strings, but one can use them in many different contexts or visualizations. I’m especially interested in their applications to musical form and structure; the generated strings are really easy to interpret and use in different contexts. I hope others will also find them useful. You can get NatureToolkit from the SC Quarks repository, as usual.</p>
<h2 id="more-to-come">More to Come:</h2>
<p>NatureToolkit is far from complete with just an L-System implementation, of course. I’ve also developed a general purpose, fully modular Genetic Algorithm framework for SC, as well as an automatic parameter matching system built on that framework (it finds suitable parameters, inferred from an input sound, for a given synthesizer topology). The parameter matching system can use multiple processing cores of a computer and can also parallelize the analysis tasks across multiple machines, so it is even suitable for real-time use! However, the documentation isn’t complete yet, and it is accompanied by a paper I’ve submitted to the EvoMUSART 2010 conference (European Event on Evolutionary and Biologically Inspired Music, Sound, Art and Design), so I’m not allowed to publish it publicly yet. You can still take it for a test drive if you are interested; just get the sources from my github. I’ll be happy to give support to the adventurous spirits out there.</p>
New Quark: MultiTouchPad
2009-11-08T03:01:01+02:00
http://www.earslap.com//weblog/new-quark-multitouchpad
<p>I’ve added a new Quark to SuperCollider Quarks repository that allows you to access the multitouch data from supported MacBook (and MacBook Pro) touchpads. It needs the <a href="http://github.com/earslap/tongsengmod">tongsengmod</a> application I’ve forked from the <a href="http://github.com/fajran/tongseng">tongseng</a> project, but once tongsengmod is installed, integration is seamless. The details can be found in the help file. Here is a rudimentary example (the example code for this can also be found in the help file):</p>
<iframe class="vimeo-vid" src="//player.vimeo.com/video/7498218" width="500" height="374" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe>
<p>I’ve only tested this under Leopard 10.5.8 with a (late 2008) unibody MacBook Pro. Your mileage may vary under Snow Leopard and older MacBooks (I’m pretty sure that the latest MacBooks before the unibody models are also capable of using this), so please let me know if you encounter any problems, or get it to run on other systems.</p>
deQuencher - A Live Sequencer
2009-08-27T04:01:01+03:00
http://www.earslap.com//weblog/dequencher-a-live-sequencer
<p>deQuencher is a live sequencing tool. The idea behind it is that, instead of having separate and isolated layers for musical events, your sound generators and gates / triggers / parameter changes share the same canvas (and layer), and you interact with your “objects” on that canvas to express your musical ideas in the time domain.</p>
<p>The program records your mouse gestures and then plays them back continuously, and your objects interact based on the distance between each other. For now, there are 6 types of objects.</p>
<p>They are:</p>
<ul>
<li>Synth (sound generators: parameter / trigger in, audio out),</li>
<li>Fx (effects units: audio in, audio out),</li>
<li>Parameter (sends parameters based on proximity: no input, parameter out),</li>
<li>Trigger (sends triggers when close enough to a synth / fx object: no input, trigger out)</li>
<li>Chaining Fx (effects units that can be chained between each other. audio in / audio out)</li>
<li>Freesound sampler (downloads samples from freesound based on the keyword you give, and allows you to use that sound in your improvisations realtime.)</li>
</ul>
<p>Once you record your gesture, you have the ability to post-process the recorded movement: you can move the object (add an offset), you can scale the movement, and you can clip the playback from its end so that the object jumps back to its origin earlier. Proximity thresholds for the objects can also be set.</p>
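The three post-processing operations can be sketched on a simple gesture representation. This is my own hypothetical modeling (a gesture as an array of `{x, y}` points), not how deQuencher actually stores gestures internally:

```javascript
// Post-process a recorded gesture: `dx`/`dy` offset the object, `scale`
// grows or shrinks the movement around its starting point, and `clip`
// keeps only the first fraction of the path so playback jumps back to
// the origin earlier.
function transformGesture(points, { dx = 0, dy = 0, scale = 1, clip = 1 } = {}) {
  const keep = Math.max(1, Math.round(points.length * clip));
  const origin = points[0];
  return points.slice(0, keep).map(p => ({
    x: origin.x + (p.x - origin.x) * scale + dx,
    y: origin.y + (p.y - origin.y) * scale + dy,
  }));
}
```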
<p>The program is an interface for <a href="http://supercollider.sourceforge.net/">SuperCollider</a>, so it uses SC as its sound engine. Usage is quite easy but getting started with it is not so straightforward so be sure to check the docs and tutorial out before attempting to do something with it. Comments and suggestions are always welcome.</p>
<h2 id="new-version">New version!</h2>
<p>Current Universal Binary package for Macs (docs included): <a href="http://www.earslap.com/assets/dequencher/dq_v0.2.1.dmg">dq_v0.2.1.dmg</a></p>
<p>And here are the sources: <a href="http://www.earslap.com/assets/dequencher/dq_v0.2.1src.tar.gz">dq_v0.2.1src.tar.gz</a></p>
<p>The documentation and tutorial are included in the dmg.</p>
<h2 id="whats-new">What’s new?</h2>
<p>The current version is a complete rewrite of the application in C++ using the great <a href="http://www.openframeworks.cc/">openFrameworks</a> library. There are some new features, like arbitrary chaining of Fx agents and <a href="http://www.freesound.org/">freesound</a> integration. Check the documentation and tutorial out to see what they are all about!</p>
<h2 id="old-processing-based-version">Old, Processing-based version:</h2>
<p>I’m no longer hosting the old Processing version’s documentation here. However, if you are interested, you can download the whole package for the old version, docs included, <a href="http://www.earslap.com/assets/dequencher/dqold.zip">from here</a>.</p>
<p>To see a video of deQuencher in action, you can look here:</p>
<iframe class="vimeo-vid" src="//player.vimeo.com/video/938314" width="500" height="374" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe>
<p>This performance was made with the old Processing version of dQ, so not all of the current version’s functionality is used there.</p>
Recreating the THX Deep Note
2009-07-25T04:01:01+03:00
http://www.earslap.com//article/recreating-the-thx-deep-note
<p>If you’ve ever watched a movie in a movie theater, chances are that you are familiar with the <a href="http://en.wikipedia.org/wiki/Deep_Note">Deep Note</a>, the <a href="http://en.wikipedia.org/wiki/Audio_logo">audio logo</a> of <a href="http://www.thx.com/">THX</a>. That sound is one of the first sounds we hear at the beginning of movie trailers in a THX-certified venue. I’ve always been fascinated with that great distinctive crescendo, starting from an eerie cluster of tones and ending with a full-range, bright and grand finale. What an ear treat!</p>
<p>Yesterday, probably out of nowhere, the origins of that sound tickled my curiosity, so I did a little research. I’m deeply moved by the history behind it, and I want to share what I’ve learned with you. Then we will move on to recreating that sound ourselves; get your scissors ready, and some glue!</p>
<p>The best source of information I could find about the sound, which I think is a complete electro-acoustic composition in its own right, is the great <a href="http://musicthing.blogspot.com/">Music Thing Blog</a>, in a post from 2005. <a href="http://musicthing.blogspot.com/2005/05/tiny-music-makers-pt-3-thx-sound.html">The link to the post is here.</a></p>
<p>So here is some trivia:</p>
<ul>
<li>
<p>It was made by <a href="http://en.wikipedia.org/wiki/James_A._Moorer">Dr. James Andy Moorer</a> in 1982.</p>
</li>
<li>
<p>At one point in history, it was being played 4000 times a day, almost every 20 seconds! A quote from Dr. Moorer:</p>
</li>
</ul>
<blockquote>
<p>“I like to say that the THX sound is the most widely-recognized piece of <a href="http://en.wikipedia.org/wiki/Computer-generated_music">computer-generated music</a> in the world. This may or may not be true, but it sounds cool!”</p>
</blockquote>
<ul>
<li>
<p>It was generated on a mainframe computer called the ASP (Audio Signal Processor), which was able to synthesize the sound in real time.</p>
</li>
<li>
<p>It took 20,000 lines of C code to write the program that generated the score for the ASP to play. The generated score consisted of 250,000 lines of statements for the ASP to execute.</p>
</li>
<li>
<p>The oscillators used as the voices read a digitized cello tone as their waveform. Dr. Moorer recalls the waveform having around 12 audible harmonics.
The ASP was able to run 30 oscillators of this sort in real time. (In comparison, the notebook computer I happen to be using right now can handle more than 1000 of them without a glitch.)</p>
</li>
<li>
<p>The sound itself is copyrighted, but here is a problem: the code Dr. Moorer wrote has generative characteristics (i.e. it relies on random number generators), so each time you generate a score and feed the resulting statements to the ASP, the generated sound is somewhat different. So I don’t really think it is safe to say that the process itself is, or can be, “copyrighted”. The sound itself, yeah, that one is covered.</p>
</li>
<li>
<p>It debuted in the THX trailer shown before the premiere of <a href="http://en.wikipedia.org/wiki/Return_of_the_Jedi">Return of the Jedi</a> in 1983.</p>
</li>
<li>
<p>The generative characteristics of the process became troublesome at some point. After the release of the piece with “Return of the Jedi”, the original recording of the Deep Note was lost. Dr. Moorer recreated the piece for the company, but they kept complaining that it didn’t sound the same as the original. Eventually, the original recording was found and kept in a safer place from then on.</p>
</li>
<li>
<p><a href="http://en.wikipedia.org/wiki/Dr_dre">Dr. Dre</a> asked for permission to sample the sound for his music but was turned down. He used it anyway and got sued.</p>
</li>
<li>
<p><a href="http://en.wikipedia.org/wiki/Metastasis_%28Xenakis_composition%29">Metastasis</a>, a 1954 <a href="http://en.wikipedia.org/wiki/Iannis_Xenakis">Iannis Xenakis</a> composition, has a very similar opening crescendo (as do works by various other composers). It starts with a single tone instead, and lands on a semi-dissonant tone cluster instead of a consonant one as in the Deep Note.
The sound recording used for the patent application can be listened to here:</p>
</li>
</ul>
<p><a href="http://www.uspto.gov/go/kids/soundex/74309951.mp3">http://www.uspto.gov/go/kids/soundex/74309951.mp3</a></p>
<p>Be sure to listen to the sound, because we will be referring to that particular recording when we have a go at recreating the Deep Note. You may also listen to other instances of this piece: <a href="http://www.thx.com/cinema/trailers.html">http://www.thx.com/cinema/trailers.html</a></p>
<p>Here is some technical/theoretical trivia before we start synthesizing:</p>
<ul>
<li>
<p>This is my observation: the original recording at the United States Patent and Trademark Office website has a fundamental pitch that sits right between D and Eb, but the newer renditions on more recent features have a fundamental between E and F. I’ll use the original D/Eb fundamental in my recreation attempt. The newer versions are also usually shorter, if I’m not mistaken. Clearly, I like the one submitted to the patent office better.</p>
</li>
<li>
<p>According to Dr. Moorer (and also confirmed by my ears), the piece starts with oscillators tuned to random frequencies between 200Hz and 400Hz. But the oscillators are not simply buzzing away: their frequencies are modulated randomly, with smoothing applied to the random pitch transitions. This goes on until the crescendo is initiated later in the piece.</p>
</li>
<li>
<p>Inside the crescendo and in the final landing sound, the randomizers are still modulating the frequencies of the oscillators, so no oscillator is stable at any given moment. But the random sweep range is narrow, so it merely adds an organic / chorus-y feel to the sound sources.</p>
</li>
<li>
<p>Dr. Moorer recalls the digitized cello sound had around 12 audible harmonics in its spectrum.</p>
</li>
<li>
<p>To my knowledge, the written score (which was used to get the copyright) was never released, but Dr. Moorer says he can supply the score if we can get permission from THX, though I think that is not really necessary for an attempt at recreating the piece.</p>
</li>
<li>
<p>The final landing sound (technically not a chord) is just stacked-up octaves of the fundamental, to my ears. So when recreating the piece, we will start with randomly tuned (between 200Hz and 400Hz) oscillators, make a semi-sophisticated sweep, and land at stacked-up octaves on a fundamental that sits between low D and Eb.</p>
</li>
</ul>
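<p>Before touching SuperCollider, we can verify that midi pitch 14.5 really sits between low D and Eb. Here is a quick sanity check in plain Python (illustration only; the synthesis code below is SuperCollider), using the standard MIDI-to-frequency conversion that SuperCollider exposes as midicps:</p>

```python
# MIDI-to-frequency conversion (what SuperCollider calls midicps):
# MIDI note 69 is A4 = 440 Hz, and each semitone is a factor of 2^(1/12).
def midicps(midi):
    return 440.0 * 2.0 ** ((midi - 69.0) / 12.0)

# The quarter-tone between low D and Eb:
print(round(midicps(14.5), 2))  # 18.89 Hz, between D0 (~18.35) and Eb0 (~19.45)
```

<p>So the finale will land on octaves of a roughly 18.9Hz fundamental.</p>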
<p>So let’s get going. SuperCollider is my tool of choice here. I start with a simple waveform: I want to use a sawtooth wave as the oscillator source, since it has a rich harmonic spectrum consisting of even and odd partials. I’ll want to filter the upper partials out later on. Here is some beginning code:</p>
<div class="highlight"><pre><code class="language-javascript" data-lang="javascript"><span class="c1">//30 oscillators together, distributed across the stereo field</span>
<span class="p">(</span>
<span class="p">{</span>
<span class="kd">var</span> <span class="nx">numVoices</span> <span class="o">=</span> <span class="mi">30</span><span class="p">;</span>
<span class="c1">//generating initial random fundamentals:</span>
<span class="kd">var</span> <span class="nx">fundamentals</span> <span class="o">=</span> <span class="p">{</span><span class="nx">rrand</span><span class="p">(</span><span class="mf">200.0</span><span class="p">,</span> <span class="mf">400.0</span><span class="p">)}</span><span class="o">!</span><span class="nx">numVoices</span><span class="p">;</span>
<span class="nx">Mix</span>
<span class="p">({</span><span class="o">|</span><span class="nx">numTone</span><span class="o">|</span>
<span class="kd">var</span> <span class="nx">freq</span> <span class="o">=</span> <span class="nx">fundamentals</span><span class="p">[</span><span class="nx">numTone</span><span class="p">];</span>
<span class="nx">Pan2</span><span class="p">.</span><span class="nx">ar</span>
<span class="p">(</span>
<span class="nx">Saw</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="nx">freq</span><span class="p">),</span>
<span class="nx">rrand</span><span class="p">(</span><span class="o">-</span><span class="mf">0.5</span><span class="p">,</span> <span class="mf">0.5</span><span class="p">),</span> <span class="c1">//stereo placement of voices</span>
<span class="nx">numVoices</span><span class="p">.</span><span class="nx">reciprocal</span> <span class="c1">//scale the amplitude of each voice</span>
<span class="p">)</span>
<span class="p">}</span><span class="o">!</span><span class="nx">numVoices</span><span class="p">);</span>
<span class="p">}.</span><span class="nx">play</span><span class="p">;</span>
<span class="p">)</span></code></pre></div>
<p>I chose to have 30 oscillators for sound generation, congruent with the capabilities of the ASP computer as reported by Dr. Moorer. I’ve created an array of 30 random frequencies between 200Hz and 400Hz, distributed them randomly across the stereo field with Pan2.ar and the argument rrand(-0.5, 0.5), and assigned the freqs to the sawtooth oscillators (30 instances). Here is how it sounds:</p>
<div class="audio-player"><audio src="http://www.earslap.com/assets/thxdeepsound/sound1.mp3" controls="controls"><p>Sorry, your browser probably doesn't recognise the audio tag somehow...</p></audio></div>
<p>Now if we examine the info provided by Dr. Moorer, and/or listen closely to the original piece, we can hear that the pitches of the oscillators drift up and down randomly. We want to add this for a more organic feel. The frequency scale is logarithmic, so lower frequencies should have narrower wobbling ranges than higher frequencies. We can implement this by sorting our randomly generated frequency values and assigning LFNoise2 (which generates quadratically interpolated random values) mul arguments in order inside our Mix macro. I also added a lowpass filter for each oscillator, with a cutoff frequency of 5 * the oscillator’s freq and a moderate 1/Q:</p>
<div class="highlight"><pre><code class="language-javascript" data-lang="javascript"><span class="c1">//adding random wobbling to freqs, sorting randoms, lowpassing</span>
<span class="p">(</span>
<span class="p">{</span>
<span class="kd">var</span> <span class="nx">numVoices</span> <span class="o">=</span> <span class="mi">30</span><span class="p">;</span>
<span class="c1">//sorting to get high freqs at top</span>
<span class="kd">var</span> <span class="nx">fundamentals</span> <span class="o">=</span> <span class="p">({</span><span class="nx">rrand</span><span class="p">(</span><span class="mf">200.0</span><span class="p">,</span> <span class="mf">400.0</span><span class="p">)}</span><span class="o">!</span><span class="nx">numVoices</span><span class="p">).</span><span class="nx">sort</span><span class="p">;</span>
<span class="nx">Mix</span>
<span class="p">({</span><span class="o">|</span><span class="nx">numTone</span><span class="o">|</span>
<span class="c1">//fundamentals are sorted, so higher frequencies drift more.</span>
<span class="kd">var</span> <span class="nx">freq</span> <span class="o">=</span> <span class="nx">fundamentals</span><span class="p">[</span><span class="nx">numTone</span><span class="p">]</span> <span class="o">+</span> <span class="nx">LFNoise2</span><span class="p">.</span><span class="nx">kr</span><span class="p">(</span><span class="mf">0.5</span><span class="p">,</span> <span class="mi">3</span> <span class="o">*</span> <span class="p">(</span><span class="nx">numTone</span> <span class="o">+</span> <span class="mi">1</span><span class="p">));</span>
<span class="nx">Pan2</span><span class="p">.</span><span class="nx">ar</span>
<span class="p">(</span>
<span class="nx">BLowPass</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="nx">Saw</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="nx">freq</span><span class="p">),</span> <span class="nx">freq</span> <span class="o">*</span> <span class="mi">5</span><span class="p">,</span> <span class="mf">0.5</span><span class="p">),</span>
<span class="nx">rrand</span><span class="p">(</span><span class="o">-</span><span class="mf">0.5</span><span class="p">,</span> <span class="mf">0.5</span><span class="p">),</span>
<span class="nx">numVoices</span><span class="p">.</span><span class="nx">reciprocal</span>
<span class="p">)</span>
<span class="p">}</span><span class="o">!</span><span class="nx">numVoices</span><span class="p">);</span>
<span class="p">}.</span><span class="nx">play</span><span class="p">;</span>
<span class="p">)</span></code></pre></div>
<p>Here is how it sounds with the latest tweaks:</p>
<div class="audio-player"><audio src="http://www.earslap.com/assets/thxdeepsound/sound2.mp3" controls="controls"><p>Sorry, your browser probably doesn't recognise the audio tag somehow...</p></audio></div>
<p>This sounds like a good starting point, so let’s start implementing our sweep, initially in a very crude way. To implement the sweep, we first need to define the final landing pitch for each of the oscillators. This is not very straightforward, but not very hard either. The fundamental tone should be the pitch that is right in between low D and Eb, so the midi pitch for that tone would be 14.5 (0 is C, count up chromatically, I’m skipping the first octave). So we need to map our freq arguments for the 30 oscillators from random frequencies between 200Hz and 400Hz to 14.5 and its octaves. By ear, I’ve chosen to use the first 6 octaves. So our final array of destination frequencies will be:</p>
<div class="highlight"><pre><code class="language-javascript" data-lang="javascript"><span class="p">(</span><span class="nx">numVoices</span><span class="p">.</span><span class="nx">collect</span><span class="p">({</span><span class="o">|</span><span class="nx">nv</span><span class="o">|</span> <span class="p">(</span><span class="nx">nv</span><span class="o">/</span><span class="p">(</span><span class="nx">numVoices</span><span class="o">/</span><span class="mi">6</span><span class="p">)).</span><span class="nx">round</span> <span class="o">*</span> <span class="mi">12</span><span class="p">;</span> <span class="p">})</span> <span class="o">+</span> <span class="mf">14.5</span><span class="p">).</span><span class="nx">midicps</span><span class="p">;</span></code></pre></div>
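<p>To see what that one-liner produces, here is an equivalent sketch in plain Python (for illustration; the article’s code is SuperCollider). Each of the 30 voices gets snapped to one of the octave transpositions of midi pitch 14.5:</p>

```python
# Python rendering of the SuperCollider one-liner above (illustration only).
num_voices = 30

def midicps(midi):  # MIDI note 69 = A4 = 440 Hz
    return 440.0 * 2.0 ** ((midi - 69.0) / 12.0)

# (nv / (numVoices / 6)).round * 12 + 14.5, then converted to Hz:
final_midi = [round(nv / (num_voices / 6)) * 12 + 14.5 for nv in range(num_voices)]
final_freqs = [midicps(m) for m in final_midi]

print(sorted(set(final_midi)))  # [14.5, 26.5, 38.5, 50.5, 62.5, 74.5, 86.5]
```

<p>So the 30 voices collapse onto seven distinct landing pitches, octaves apart and spanning six octaves, with the lowest at roughly 18.9Hz.</p>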
<p>We’ll be using a sweep that goes from 0 to 1. The random frequencies will be multiplied by (1 - sweep), and the destination frequencies will be multiplied by sweep itself. So when sweep is 0 (at the beginning), freq will be the random one; when it is 0.5, it will be ((random + destination) / 2); and when it is 1, freq will be our destination value. Here is our modified code:</p>
<div class="highlight"><pre><code class="language-javascript" data-lang="javascript"><span class="c1">//creating the initial sweep (crude), creating final pitches</span>
<span class="p">(</span>
<span class="p">{</span>
<span class="kd">var</span> <span class="nx">numVoices</span> <span class="o">=</span> <span class="mi">30</span><span class="p">;</span>
<span class="kd">var</span> <span class="nx">fundamentals</span> <span class="o">=</span> <span class="p">({</span><span class="nx">rrand</span><span class="p">(</span><span class="mf">200.0</span><span class="p">,</span> <span class="mf">400.0</span><span class="p">)}</span><span class="o">!</span><span class="nx">numVoices</span><span class="p">).</span><span class="nx">sort</span><span class="p">;</span>
<span class="kd">var</span> <span class="nx">finalPitches</span> <span class="o">=</span> <span class="p">(</span><span class="nx">numVoices</span><span class="p">.</span><span class="nx">collect</span><span class="p">({</span><span class="o">|</span><span class="nx">nv</span><span class="o">|</span> <span class="p">(</span><span class="nx">nv</span><span class="o">/</span><span class="p">(</span><span class="nx">numVoices</span><span class="o">/</span><span class="mi">6</span><span class="p">)).</span><span class="nx">round</span> <span class="o">*</span> <span class="mi">12</span><span class="p">;</span> <span class="p">})</span> <span class="o">+</span> <span class="mf">14.5</span><span class="p">).</span><span class="nx">midicps</span><span class="p">;</span>
<span class="kd">var</span> <span class="nx">sweepEnv</span> <span class="o">=</span> <span class="nx">EnvGen</span><span class="p">.</span><span class="nx">kr</span><span class="p">(</span><span class="nx">Env</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">],</span> <span class="p">[</span><span class="mi">13</span><span class="p">]));</span>
<span class="nx">Mix</span>
<span class="p">({</span><span class="o">|</span><span class="nx">numTone</span><span class="o">|</span>
<span class="kd">var</span> <span class="nx">initRandomFreq</span> <span class="o">=</span> <span class="nx">fundamentals</span><span class="p">[</span><span class="nx">numTone</span><span class="p">]</span> <span class="o">+</span> <span class="nx">LFNoise2</span><span class="p">.</span><span class="nx">kr</span><span class="p">(</span><span class="mf">0.5</span><span class="p">,</span> <span class="mi">3</span> <span class="o">*</span> <span class="p">(</span><span class="nx">numTone</span> <span class="o">+</span> <span class="mi">1</span><span class="p">));</span>
<span class="kd">var</span> <span class="nx">destinationFreq</span> <span class="o">=</span> <span class="nx">finalPitches</span><span class="p">[</span><span class="nx">numTone</span><span class="p">];</span>
<span class="kd">var</span> <span class="nx">freq</span> <span class="o">=</span> <span class="p">((</span><span class="mi">1</span> <span class="o">-</span> <span class="nx">sweepEnv</span><span class="p">)</span> <span class="o">*</span> <span class="nx">initRandomFreq</span><span class="p">)</span> <span class="o">+</span> <span class="p">(</span><span class="nx">sweepEnv</span> <span class="o">*</span> <span class="nx">destinationFreq</span><span class="p">);</span>
<span class="nx">Pan2</span><span class="p">.</span><span class="nx">ar</span>
<span class="p">(</span>
<span class="nx">BLowPass</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="nx">Saw</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="nx">freq</span><span class="p">),</span> <span class="nx">freq</span> <span class="o">*</span> <span class="mi">5</span><span class="p">,</span> <span class="mf">0.5</span><span class="p">),</span>
<span class="nx">rrand</span><span class="p">(</span><span class="o">-</span><span class="mf">0.5</span><span class="p">,</span> <span class="mf">0.5</span><span class="p">),</span>
<span class="nx">numVoices</span><span class="p">.</span><span class="nx">reciprocal</span> <span class="c1">//scale the amplitude of each voice</span>
<span class="p">)</span>
<span class="p">}</span><span class="o">!</span><span class="nx">numVoices</span><span class="p">);</span>
<span class="p">}.</span><span class="nx">play</span><span class="p">;</span>
<span class="p">)</span></code></pre></div>
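<p>Before listening, the crossfade formula itself can be sanity-checked numerically. A minimal sketch in plain Python (illustration only), using a hypothetical voice that starts at 300Hz and must land on a 19Hz fundamental:</p>

```python
# The crossfade used in the synth above:
# freq = (1 - sweep) * initRandomFreq + sweep * destinationFreq
def crossfade(sweep, init_freq, dest_freq):
    return (1.0 - sweep) * init_freq + sweep * dest_freq

init, dest = 300.0, 19.0  # one random voice gliding down to the fundamental
print(crossfade(0.0, init, dest))  # 300.0 -> pure initial frequency
print(crossfade(0.5, init, dest))  # 159.5 -> the midpoint (300 + 19) / 2
print(crossfade(1.0, init, dest))  # 19.0  -> pure destination frequency
```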
<p>Here is the sound:</p>
<div class="audio-player"><audio src="http://www.earslap.com/assets/thxdeepsound/sound3.mp3" controls="controls"><p>Sorry, your browser probably doesn't recognise the audio tag somehow...</p></audio></div>
<p>As I said earlier, this is a very crude sweep. It goes linearly from 0 to 1, which is not congruent with the original composition. Also, you should have noticed that the final octaves sound AWFUL, because they are tuned to perfect octaves and fuse into each other, having fundamental-overtone relationships between them. We will fix this by adding random wobbling to the final pitches, just as we did with the initial random pitches, and it will sound much, much more organic.</p>
<p>So we should fix the frequency sweep envelope first. The earlier envelope was just for trying out the formulas (and the final landing). If we observe the original piece, we can see that there is very little change in organization for the first 5-6 seconds. After that, there is a fast and exponential sweep that lands the oscillators at their final, octave-spaced destinations. Here is the envelope I’ve chosen:</p>
<div class="highlight"><pre><code class="language-javascript" data-lang="javascript"><span class="nx">sweepEnv</span> <span class="o">=</span> <span class="nx">EnvGen</span><span class="p">.</span><span class="nx">kr</span><span class="p">(</span><span class="nx">Env</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span> <span class="mf">0.1</span><span class="p">,</span> <span class="mi">1</span><span class="p">],</span> <span class="p">[</span><span class="mi">5</span><span class="p">,</span> <span class="mi">8</span><span class="p">],</span> <span class="p">[</span><span class="mi">2</span><span class="p">,</span> <span class="mi">5</span><span class="p">]));</span></code></pre></div>
<p>It takes 5 seconds to go from 0 to 0.1, and 8 seconds to go from 0.1 to 1. The curvatures for the segments are 2 and 5. We’ll see how that works out, but we also need to fix the final sound spacings. Just as we did with the random frequencies, we will add random wobbles with LFNoise2, with ranges proportional to each oscillator’s final frequency. This will make the finale sound much more organic. Here is the modified code:</p>
<div class="highlight"><pre><code class="language-javascript" data-lang="javascript"><span class="c1">//tweaking the envelope, detuning the final chord</span>
<span class="p">(</span>
<span class="p">{</span>
<span class="kd">var</span> <span class="nx">numVoices</span> <span class="o">=</span> <span class="mi">30</span><span class="p">;</span>
<span class="kd">var</span> <span class="nx">fundamentals</span> <span class="o">=</span> <span class="p">({</span><span class="nx">rrand</span><span class="p">(</span><span class="mf">200.0</span><span class="p">,</span> <span class="mf">400.0</span><span class="p">)}</span><span class="o">!</span><span class="nx">numVoices</span><span class="p">).</span><span class="nx">sort</span><span class="p">;</span>
<span class="kd">var</span> <span class="nx">finalPitches</span> <span class="o">=</span> <span class="p">(</span><span class="nx">numVoices</span><span class="p">.</span><span class="nx">collect</span><span class="p">({</span><span class="o">|</span><span class="nx">nv</span><span class="o">|</span> <span class="p">(</span><span class="nx">nv</span><span class="o">/</span><span class="p">(</span><span class="nx">numVoices</span><span class="o">/</span><span class="mi">6</span><span class="p">)).</span><span class="nx">round</span> <span class="o">*</span> <span class="mi">12</span><span class="p">;</span> <span class="p">})</span> <span class="o">+</span> <span class="mf">14.5</span><span class="p">).</span><span class="nx">midicps</span><span class="p">;</span>
<span class="kd">var</span> <span class="nx">sweepEnv</span> <span class="o">=</span> <span class="nx">EnvGen</span><span class="p">.</span><span class="nx">kr</span><span class="p">(</span><span class="nx">Env</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span> <span class="mf">0.1</span><span class="p">,</span> <span class="mi">1</span><span class="p">],</span> <span class="p">[</span><span class="mi">5</span><span class="p">,</span> <span class="mi">8</span><span class="p">],</span> <span class="p">[</span><span class="mi">2</span><span class="p">,</span> <span class="mi">5</span><span class="p">]));</span>
<span class="nx">Mix</span>
<span class="p">({</span><span class="o">|</span><span class="nx">numTone</span><span class="o">|</span>
<span class="kd">var</span> <span class="nx">initRandomFreq</span> <span class="o">=</span> <span class="nx">fundamentals</span><span class="p">[</span><span class="nx">numTone</span><span class="p">]</span> <span class="o">+</span> <span class="nx">LFNoise2</span><span class="p">.</span><span class="nx">kr</span><span class="p">(</span><span class="mf">0.5</span><span class="p">,</span> <span class="mi">3</span> <span class="o">*</span> <span class="p">(</span><span class="nx">numTone</span> <span class="o">+</span> <span class="mi">1</span><span class="p">));</span>
<span class="kd">var</span> <span class="nx">destinationFreq</span> <span class="o">=</span> <span class="nx">finalPitches</span><span class="p">[</span><span class="nx">numTone</span><span class="p">]</span> <span class="o">+</span> <span class="nx">LFNoise2</span><span class="p">.</span><span class="nx">kr</span><span class="p">(</span><span class="mf">0.1</span><span class="p">,</span> <span class="p">(</span><span class="nx">numTone</span> <span class="o">/</span> <span class="mi">4</span><span class="p">));</span>
<span class="kd">var</span> <span class="nx">freq</span> <span class="o">=</span> <span class="p">((</span><span class="mi">1</span> <span class="o">-</span> <span class="nx">sweepEnv</span><span class="p">)</span> <span class="o">*</span> <span class="nx">initRandomFreq</span><span class="p">)</span> <span class="o">+</span> <span class="p">(</span><span class="nx">sweepEnv</span> <span class="o">*</span> <span class="nx">destinationFreq</span><span class="p">);</span>
<span class="nx">Pan2</span><span class="p">.</span><span class="nx">ar</span>
<span class="p">(</span>
<span class="nx">BLowPass</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="nx">Saw</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="nx">freq</span><span class="p">),</span> <span class="nx">freq</span> <span class="o">*</span> <span class="mi">8</span><span class="p">,</span> <span class="mf">0.5</span><span class="p">),</span>
<span class="nx">rrand</span><span class="p">(</span><span class="o">-</span><span class="mf">0.5</span><span class="p">,</span> <span class="mf">0.5</span><span class="p">),</span>
<span class="nx">numVoices</span><span class="p">.</span><span class="nx">reciprocal</span>
<span class="p">)</span>
<span class="p">}</span><span class="o">!</span><span class="nx">numVoices</span><span class="p">);</span>
<span class="p">}.</span><span class="nx">play</span><span class="p">;</span>
<span class="p">)</span></code></pre></div>
<p>Here, I’ve also tweaked the cutoff frequency of the lowpass filter to my liking. I like tweaking stuff, until it alienates me from what I’ve been working on… Anyway. Here is the resulting sound:</p>
<div class="audio-player"><audio src="http://www.earslap.com/assets/thxdeepsound/sound4.mp3" controls="controls"><p>Sorry, your browser probably doesn't recognise the audio tag somehow...</p></audio></div>
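<p>As an aside, those positive curvature values are what give each sweep its “hold back, then rush” character. A minimal sketch of a curved segment in plain Python (illustration only, assuming SuperCollider’s standard EnvGen curve formula) shows how little ground a curve-5 segment covers in its first half:</p>

```python
import math

# One curved envelope segment, assuming SuperCollider's EnvGen formula:
# for curve c != 0, the value at relative position pos in [0, 1] is
#   y0 + (y1 - y0) * (1 - exp(pos * c)) / (1 - exp(c))
def curved_segment(y0, y1, pos, curve):
    if curve == 0:
        return y0 + (y1 - y0) * pos  # degenerate case: linear segment
    return y0 + (y1 - y0) * (1.0 - math.exp(pos * curve)) / (1.0 - math.exp(curve))

# Halfway through a 0 -> 1 segment with curve 5, the envelope has only
# covered about 7.6% of the distance; the rest happens in a rush:
print(round(curved_segment(0.0, 1.0, 0.5, 5), 3))  # 0.076
```

<p>That lopsidedness is what makes each voice linger near its random starting pitch and then snap to the octaves at the end.</p>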
<p>I’m not really happy with this envelope either. It needs a longer initialization and a faster finish. Or wait… Do I have to have the same envelope for every oscillator? Absolutely not! Each oscillator should have its own envelope with slightly different time and curve values, and I bet it will be more interesting. Also, the high frequency overtones of the random sawtooth cluster are a bit annoying, so I’m adding a lowpass to the sum, whose cutoff is controlled by a global “outer” envelope that has nothing to do with the envelopes of the oscillators. Here is the modified code:</p>
<div class="highlight"><pre><code class="language-javascript" data-lang="javascript"><span class="c1">//custom envelopes. lowpass at end</span>
<span class="p">(</span>
<span class="p">{</span>
<span class="kd">var</span> <span class="nx">numVoices</span> <span class="o">=</span> <span class="mi">30</span><span class="p">;</span>
<span class="kd">var</span> <span class="nx">fundamentals</span> <span class="o">=</span> <span class="p">({</span><span class="nx">rrand</span><span class="p">(</span><span class="mf">200.0</span><span class="p">,</span> <span class="mf">400.0</span><span class="p">)}</span><span class="o">!</span><span class="nx">numVoices</span><span class="p">).</span><span class="nx">sort</span><span class="p">;</span>
<span class="kd">var</span> <span class="nx">finalPitches</span> <span class="o">=</span> <span class="p">(</span><span class="nx">numVoices</span><span class="p">.</span><span class="nx">collect</span><span class="p">({</span><span class="o">|</span><span class="nx">nv</span><span class="o">|</span> <span class="p">(</span><span class="nx">nv</span><span class="o">/</span><span class="p">(</span><span class="nx">numVoices</span><span class="o">/</span><span class="mi">6</span><span class="p">)).</span><span class="nx">round</span> <span class="o">*</span> <span class="mi">12</span><span class="p">;</span> <span class="p">})</span> <span class="o">+</span> <span class="mf">14.5</span><span class="p">).</span><span class="nx">midicps</span><span class="p">;</span>
<span class="kd">var</span> <span class="nx">outerEnv</span> <span class="o">=</span> <span class="nx">EnvGen</span><span class="p">.</span><span class="nx">kr</span><span class="p">(</span><span class="nx">Env</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span> <span class="mf">0.1</span><span class="p">,</span> <span class="mi">1</span><span class="p">],</span> <span class="p">[</span><span class="mi">8</span><span class="p">,</span> <span class="mi">4</span><span class="p">],</span> <span class="p">[</span><span class="mi">2</span><span class="p">,</span> <span class="mi">4</span><span class="p">]));</span>
<span class="kd">var</span> <span class="nx">snd</span> <span class="o">=</span> <span class="nx">Mix</span>
<span class="p">({</span><span class="o">|</span><span class="nx">numTone</span><span class="o">|</span>
<span class="kd">var</span> <span class="nx">initRandomFreq</span> <span class="o">=</span> <span class="nx">fundamentals</span><span class="p">[</span><span class="nx">numTone</span><span class="p">]</span> <span class="o">+</span> <span class="nx">LFNoise2</span><span class="p">.</span><span class="nx">kr</span><span class="p">(</span><span class="mf">0.5</span><span class="p">,</span> <span class="mi">3</span> <span class="o">*</span> <span class="p">(</span><span class="nx">numTone</span> <span class="o">+</span> <span class="mi">1</span><span class="p">));</span>
<span class="kd">var</span> <span class="nx">destinationFreq</span> <span class="o">=</span> <span class="nx">finalPitches</span><span class="p">[</span><span class="nx">numTone</span><span class="p">]</span> <span class="o">+</span> <span class="nx">LFNoise2</span><span class="p">.</span><span class="nx">kr</span><span class="p">(</span><span class="mf">0.1</span><span class="p">,</span> <span class="p">(</span><span class="nx">numTone</span> <span class="o">/</span> <span class="mi">4</span><span class="p">));</span>
<span class="kd">var</span> <span class="nx">sweepEnv</span> <span class="o">=</span>
<span class="nx">EnvGen</span><span class="p">.</span><span class="nx">kr</span><span class="p">(</span>
<span class="nx">Env</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span> <span class="nx">rrand</span><span class="p">(</span><span class="mf">0.1</span><span class="p">,</span> <span class="mf">0.2</span><span class="p">),</span> <span class="mi">1</span><span class="p">],</span> <span class="p">[</span><span class="nx">rrand</span><span class="p">(</span><span class="mf">5.0</span><span class="p">,</span> <span class="mi">6</span><span class="p">),</span> <span class="nx">rrand</span><span class="p">(</span><span class="mf">8.0</span><span class="p">,</span> <span class="mi">9</span><span class="p">)],</span>
<span class="p">[</span><span class="nx">rrand</span><span class="p">(</span><span class="mf">2.0</span><span class="p">,</span> <span class="mf">3.0</span><span class="p">),</span> <span class="nx">rrand</span><span class="p">(</span><span class="mf">4.0</span><span class="p">,</span> <span class="mf">5.0</span><span class="p">)]));</span>
<span class="kd">var</span> <span class="nx">freq</span> <span class="o">=</span> <span class="p">((</span><span class="mi">1</span> <span class="o">-</span> <span class="nx">sweepEnv</span><span class="p">)</span> <span class="o">*</span> <span class="nx">initRandomFreq</span><span class="p">)</span> <span class="o">+</span> <span class="p">(</span><span class="nx">sweepEnv</span> <span class="o">*</span> <span class="nx">destinationFreq</span><span class="p">);</span>
<span class="nx">Pan2</span><span class="p">.</span><span class="nx">ar</span>
<span class="p">(</span>
<span class="nx">BLowPass</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="nx">Saw</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="nx">freq</span><span class="p">),</span> <span class="nx">freq</span> <span class="o">*</span> <span class="mi">8</span><span class="p">,</span> <span class="mf">0.5</span><span class="p">),</span>
<span class="nx">rrand</span><span class="p">(</span><span class="o">-</span><span class="mf">0.5</span><span class="p">,</span> <span class="mf">0.5</span><span class="p">),</span>
<span class="nx">numVoices</span><span class="p">.</span><span class="nx">reciprocal</span>
<span class="p">)</span>
<span class="p">}</span><span class="o">!</span><span class="nx">numVoices</span><span class="p">);</span>
<span class="nx">BLowPass</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="nx">snd</span><span class="p">,</span> <span class="mi">2000</span> <span class="o">+</span> <span class="p">(</span><span class="nx">outerEnv</span> <span class="o">*</span> <span class="mi">18000</span><span class="p">),</span> <span class="mf">0.5</span><span class="p">);</span>
<span class="p">}.</span><span class="nx">play</span><span class="p">;</span>
<span class="p">)</span></code></pre></div>
<p>The slightly out-of-phase envelopes make the sweep more interesting, and the lowpass starting at 2000Hz helps tame the initial cluster. Here is what it sounds like:</p>
<div class="audio-player"><audio src="http://www.earslap.com/assets/thxdeepsound/sound5.mp3" controls="controls"><p>Sorry, your browser probably doesn't recognise the audio tag somehow...</p></audio></div>
<p>I have one more thing that will make the process sound more interesting. Remember that we sorted the random oscillators at the beginning? We can now reverse that sort, so that oscillators running at higher random frequencies end up in the bottom voices after the crescendo, and vice versa. This adds more &#8220;movement&#8221; to the crescendo and is quite congruent with the way the original piece is structured. I&#8217;m not sure whether Dr. Moorer programmed it specifically this way, but the chosen recording demonstrates this behavior and it sounds cool, be it a random product of the generative process or a compositional choice (oh, did I say that? If the process covers it, it IS a choice&#8230; or is it?). So I&#8217;ll reverse the sorted values, and the way we structured our code will make sure that the higher-pitched sawtooths end up in the lower voices in the finale, and vice versa.</p>
<p>Another thing: we need a louder bass. As it stands, all voices have equal amplitude. I want the lower voices to have slightly higher amplitude, decaying proportionally as the frequency goes up, so I&#8217;ll change the mul argument of Pan2 to take this into account. I&#8217;ll also re-tweak the cutoff frequencies of the lowpass filters governing the individual oscillators, and add a global amplitude envelope that fades the piece in, fades it out when it ends, and frees the synth on the server. With some more numeric tweaks here and there, here is our final code:</p>
<div class="highlight"><pre><code class="language-javascript" data-lang="javascript"><span class="c1">//inverting init sort, louder bass, final volume envelope, some little tweaks</span>
<span class="p">(</span>
<span class="p">{</span>
<span class="kd">var</span> <span class="nx">numVoices</span> <span class="o">=</span> <span class="mi">30</span><span class="p">;</span>
<span class="kd">var</span> <span class="nx">fundamentals</span> <span class="o">=</span> <span class="p">({</span><span class="nx">rrand</span><span class="p">(</span><span class="mf">200.0</span><span class="p">,</span> <span class="mf">400.0</span><span class="p">)}</span><span class="o">!</span><span class="nx">numVoices</span><span class="p">).</span><span class="nx">sort</span><span class="p">.</span><span class="nx">reverse</span><span class="p">;</span>
<span class="kd">var</span> <span class="nx">finalPitches</span> <span class="o">=</span> <span class="p">(</span><span class="nx">numVoices</span><span class="p">.</span><span class="nx">collect</span><span class="p">({</span><span class="o">|</span><span class="nx">nv</span><span class="o">|</span> <span class="p">(</span><span class="nx">nv</span><span class="o">/</span><span class="p">(</span><span class="nx">numVoices</span><span class="o">/</span><span class="mi">6</span><span class="p">)).</span><span class="nx">round</span> <span class="o">*</span> <span class="mi">12</span><span class="p">;</span> <span class="p">})</span> <span class="o">+</span> <span class="mf">14.5</span><span class="p">).</span><span class="nx">midicps</span><span class="p">;</span>
<span class="kd">var</span> <span class="nx">outerEnv</span> <span class="o">=</span> <span class="nx">EnvGen</span><span class="p">.</span><span class="nx">kr</span><span class="p">(</span><span class="nx">Env</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span> <span class="mf">0.1</span><span class="p">,</span> <span class="mi">1</span><span class="p">],</span> <span class="p">[</span><span class="mi">8</span><span class="p">,</span> <span class="mi">4</span><span class="p">],</span> <span class="p">[</span><span class="mi">2</span><span class="p">,</span> <span class="mi">4</span><span class="p">]));</span>
<span class="kd">var</span> <span class="nx">ampEnvelope</span> <span class="o">=</span> <span class="nx">EnvGen</span><span class="p">.</span><span class="nx">kr</span><span class="p">(</span><span class="nx">Env</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">0</span><span class="p">],</span> <span class="p">[</span><span class="mi">3</span><span class="p">,</span> <span class="mi">21</span><span class="p">,</span> <span class="mi">3</span><span class="p">],</span> <span class="p">[</span><span class="mi">2</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="o">-</span><span class="mi">4</span><span class="p">]),</span> <span class="nx">doneAction</span><span class="o">:</span> <span class="mi">2</span><span class="p">);</span>
<span class="kd">var</span> <span class="nx">snd</span> <span class="o">=</span> <span class="nx">Mix</span>
<span class="p">({</span><span class="o">|</span><span class="nx">numTone</span><span class="o">|</span>
<span class="kd">var</span> <span class="nx">initRandomFreq</span> <span class="o">=</span> <span class="nx">fundamentals</span><span class="p">[</span><span class="nx">numTone</span><span class="p">]</span> <span class="o">+</span> <span class="nx">LFNoise2</span><span class="p">.</span><span class="nx">kr</span><span class="p">(</span><span class="mf">0.5</span><span class="p">,</span> <span class="mi">6</span> <span class="o">*</span> <span class="p">(</span><span class="nx">numVoices</span> <span class="o">-</span> <span class="p">(</span><span class="nx">numTone</span> <span class="o">+</span> <span class="mi">1</span><span class="p">)));</span>
<span class="kd">var</span> <span class="nx">destinationFreq</span> <span class="o">=</span> <span class="nx">finalPitches</span><span class="p">[</span><span class="nx">numTone</span><span class="p">]</span> <span class="o">+</span> <span class="nx">LFNoise2</span><span class="p">.</span><span class="nx">kr</span><span class="p">(</span><span class="mf">0.1</span><span class="p">,</span> <span class="p">(</span><span class="nx">numTone</span> <span class="o">/</span> <span class="mi">3</span><span class="p">));</span>
<span class="kd">var</span> <span class="nx">sweepEnv</span> <span class="o">=</span>
<span class="nx">EnvGen</span><span class="p">.</span><span class="nx">kr</span><span class="p">(</span>
<span class="nx">Env</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span> <span class="nx">rrand</span><span class="p">(</span><span class="mf">0.1</span><span class="p">,</span> <span class="mf">0.2</span><span class="p">),</span> <span class="mi">1</span><span class="p">],</span> <span class="p">[</span><span class="nx">rrand</span><span class="p">(</span><span class="mf">5.5</span><span class="p">,</span> <span class="mi">6</span><span class="p">),</span> <span class="nx">rrand</span><span class="p">(</span><span class="mf">8.5</span><span class="p">,</span> <span class="mi">9</span><span class="p">)],</span>
<span class="p">[</span><span class="nx">rrand</span><span class="p">(</span><span class="mf">2.0</span><span class="p">,</span> <span class="mf">3.0</span><span class="p">),</span> <span class="nx">rrand</span><span class="p">(</span><span class="mf">4.0</span><span class="p">,</span> <span class="mf">5.0</span><span class="p">)]));</span>
<span class="kd">var</span> <span class="nx">freq</span> <span class="o">=</span> <span class="p">((</span><span class="mi">1</span> <span class="o">-</span> <span class="nx">sweepEnv</span><span class="p">)</span> <span class="o">*</span> <span class="nx">initRandomFreq</span><span class="p">)</span> <span class="o">+</span> <span class="p">(</span><span class="nx">sweepEnv</span> <span class="o">*</span> <span class="nx">destinationFreq</span><span class="p">);</span>
<span class="nx">Pan2</span><span class="p">.</span><span class="nx">ar</span>
<span class="p">(</span>
<span class="nx">BLowPass</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="nx">Saw</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="nx">freq</span><span class="p">),</span> <span class="nx">freq</span> <span class="o">*</span> <span class="mi">6</span><span class="p">,</span> <span class="mf">0.6</span><span class="p">),</span>
<span class="nx">rrand</span><span class="p">(</span><span class="o">-</span><span class="mf">0.5</span><span class="p">,</span> <span class="mf">0.5</span><span class="p">),</span>
<span class="p">(</span><span class="mi">1</span> <span class="o">-</span> <span class="p">(</span><span class="mi">1</span><span class="o">/</span><span class="p">(</span><span class="nx">numTone</span> <span class="o">+</span> <span class="mi">1</span><span class="p">)))</span> <span class="o">*</span> <span class="mf">1.5</span>
<span class="p">)</span> <span class="o">/</span> <span class="nx">numVoices</span>
<span class="p">}</span><span class="o">!</span><span class="nx">numVoices</span><span class="p">);</span>
<span class="nx">Limiter</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="nx">BLowPass</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="nx">snd</span><span class="p">,</span> <span class="mi">2000</span> <span class="o">+</span> <span class="p">(</span><span class="nx">outerEnv</span> <span class="o">*</span> <span class="mi">18000</span><span class="p">),</span> <span class="mf">0.5</span><span class="p">,</span> <span class="p">(</span><span class="mi">2</span> <span class="o">+</span> <span class="nx">outerEnv</span><span class="p">)</span> <span class="o">*</span> <span class="nx">ampEnvelope</span><span class="p">));</span>
<span class="p">}.</span><span class="nx">play</span><span class="p">;</span>
<span class="p">)</span></code></pre></div>
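<p>As a quick sanity check on the finalPitches line above, here is the same mapping sketched in Python (purely an illustration, not part of the piece; midicps is reimplemented here from its standard definition, MIDI 69 = 440Hz):</p>

```python
import math

def midicps(m):
    # Standard MIDI-to-Hz conversion, as in SC's midicps (A4 = MIDI 69 = 440 Hz)
    return 440.0 * 2.0 ** ((m - 69) / 12.0)

num_voices = 30
# Mirrors: (numVoices.collect({|nv| (nv/(numVoices/6)).round * 12 }) + 14.5).midicps
# (no exact .5 cases arise for nv/5 here, so Python's round agrees with SC's)
final_pitches = [midicps(round(nv / (num_voices / 6)) * 12 + 14.5)
                 for nv in range(num_voices)]

targets = sorted(set(round(p, 6) for p in final_pitches))
print(len(targets))  # prints 7: the 30 voices snap onto 7 octave-spaced targets
ratios = [round(hi / lo, 6) for lo, hi in zip(targets, targets[1:])]
print(ratios)        # each adjacent pair of targets is exactly an octave apart
```

<p>The +14.5 offset puts every target a quarter tone above a D, which is why the finale hovers just off a pure octave-stacked chord.</p>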
<p>And here is the final recording of the piece:</p>
<div class="audio-player"><audio src="http://www.earslap.com/assets/thxdeepsound/sound6.mp3" controls="controls"><p>Sorry, your browser probably doesn't recognise the audio tag somehow...</p></audio></div>
<p>You may want to compare it with this original one:</p>
<p><a href="http://www.uspto.gov/go/kids/soundex/74309951.mp3">http://www.uspto.gov/go/kids/soundex/74309951.mp3</a></p>
<p>So this is my rendition. Of course it can be tweaked to death: envelopes, frequencies, distribution, everything&#8230; Nevertheless, I think it is a decent attempt at keeping the legacy alive, and I&#8217;d love to hear your comments and/or your own shots at interpreting this piece.</p>
<hr />
<p>Oh, and here is one more thing I did for fun. I told you it took 20,000 lines of C code to generate the original piece; I&#8217;m pretty sure Dr. Moorer had to build almost everything by hand, so that number is not surprising. But we&#8217;ve been sctweeting for some time now, trying to fit interesting sounds into 140 characters of code, so just for the fun of it I tried to replicate the essential elements of the composition in 140 characters. I think it still sounds cool. Here is the code (this one uses an F/E fundamental):</p>
<div class="highlight"><pre><code class="language-javascript" data-lang="javascript"><span class="nx">play</span><span class="p">{</span><span class="nx">Mix</span><span class="p">({</span><span class="o">|</span><span class="nx">k</span><span class="o">|</span><span class="nx">k</span><span class="o">=</span><span class="nx">k</span><span class="o">+</span><span class="mi">1</span><span class="o">/</span><span class="mi">2</span><span class="p">;</span><span class="mi">2</span><span class="o">/</span><span class="nx">k</span><span class="o">*</span><span class="nx">Mix</span><span class="p">({</span><span class="o">|</span><span class="nx">i</span><span class="o">|</span><span class="nx">i</span><span class="o">=</span><span class="nx">i</span><span class="o">+</span><span class="mi">1</span><span class="p">;</span><span class="nx">Blip</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="nx">i</span><span class="o">*</span><span class="nx">XLine</span><span class="p">.</span><span class="nx">kr</span><span class="p">(</span><span class="nx">rand</span><span class="p">(</span><span class="mi">2</span><span class="nx">e2</span><span class="p">,</span><span class="mi">4</span><span class="nx">e2</span><span class="p">),</span><span class="mi">87</span><span class="o">+</span><span class="nx">LFNoise2</span><span class="p">.</span><span class="nx">kr</span><span class="p">(</span><span class="mi">2</span><span class="p">)</span><span class="o">*</span><span class="nx">k</span><span class="p">,</span><span class="mi">15</span><span class="p">),</span><span class="mi">2</span><span class="p">,</span><span class="mi">1</span><span class="o">/</span><span class="p">(</span><span class="nx">i</span><span class="o">/</span><span class="nx">a</span><span class="o">=</span><span class="nx">XLine</span><span class="p">.</span><span class="nx">kr</span><span class="p">(</span><span class="mf">0.3</span><span class="p">,</span><span class="mi">1</span><span class="p">,</span><span class="mi">9</span><span class="p">))</span><span class="o">/</span><span class="mi">9</span><span class="p">)}</span><span class="o">!</span><span class="mi">9</span><span class="p">)}</span><span class="o">!</span><span class="mi">40</span><span class="p">)</span><span class="o">!</span><span class="mi">2</span><span class="o">*</span><span class="nx">a</span><span class="p">}</span></code></pre></div>
<p>And here is the sound this version generates:</p>
<div class="audio-player"><audio src="http://www.earslap.com/assets/thxdeepsound/soundtweet.mp3" controls="controls"><p>Sorry, your browser probably doesn't recognise the audio tag somehow...</p></audio></div>
<p>All the code on this page is collected in this document for you to experiment with: <a href="http://www.earslap.com/assets/thxdeepsound/deepnote_tutorial.sc">get it from here</a>.</p>
<p>Happy sweeping…</p>
<div class="message bb-attrib-box">
<p>This post’s social cover image is derivative work from <a href="https://www.flickr.com/photos/eflon/4860063267">this</a> <a href="http://creativecommons.org/licenses/by/2.0/">CC licensed</a> image by <a href="http://www.flickr.com/people/eflon">eflon</a>.</p>
</div>
Combination Tones and the Non-Linearities of the Human Ear
2009-04-16T04:01:01+03:00
http://www.earslap.com//article/combination-tones-and-the-nonlinearities-of-the-human-ear
<p>Last week, one of my students brought me a composition he made that used combination tones in an interesting compositional context, which gave me a nudge to do some experimenting on the issue. Here I&#8217;ll share what I&#8217;ve found interesting.</p>
<p>This is a psychoacoustic phenomenon in which, when at least two tones are sounded together, another tone whose frequency is the sum or difference of the original two is heard. This &#8220;ghost&#8221; tone is purely made up by the human ear and brain: if you inspect the two tones with a spectrogram, the ghost tone simply isn&#8217;t there. It is usually quite faint (unless the amplitudes of the original tones are high), so you need to concentrate to catch it, but it&#8217;s not very hard either.</p>
<p>Here is an example. This is a 200Hz sine tone:</p>
<div class="highlight"><pre><code class="language-javascript" data-lang="javascript"><span class="p">{</span> <span class="nx">SinOsc</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="mi">200</span><span class="p">).</span><span class="nx">dup</span> <span class="p">}.</span><span class="nx">play</span></code></pre></div>
<div class="audio-player"><audio src="http://www.earslap.com/assets/combitones/combiex1.mp3" controls="controls"><p>Sorry, your browser probably doesn't recognise the audio tag somehow...</p></audio></div>
<p>And here it is in the spectrogram:</p>
<p><a href="http://www.earslap.com/assets/combitones/0.png"><img src="http://www.earslap.com/assets/combitones/0.png" alt="" /></a></p>
<p>And here are 1000Hz and 1200Hz sine tones mixed together. The difference of their frequencies (200Hz) is also audible if you pay attention (works best with headphones, but it is also audible with speakers; if you can&#8217;t hear it, turn the volume up):</p>
<div class="highlight"><pre><code class="language-javascript" data-lang="javascript"><span class="p">{</span> <span class="nx">SinOsc</span><span class="p">.</span><span class="nx">ar</span><span class="p">([</span><span class="mi">1000</span><span class="p">,</span> <span class="mi">1200</span><span class="p">]).</span><span class="nx">mean</span><span class="p">.</span><span class="nx">dup</span> <span class="p">}.</span><span class="nx">play</span></code></pre></div>
<div class="audio-player"><audio src="http://www.earslap.com/assets/combitones/combiex2.mp3" controls="controls"><p>Sorry, your browser probably doesn't recognise the audio tag somehow...</p></audio></div>
<p>If you can hear it, great. Here is the spectral decomposition:</p>
<p><a href="http://www.earslap.com/assets/combitones/1.png"><img src="http://www.earslap.com/assets/combitones/1.png" alt="" /></a></p>
<p>Notice that there is nothing below 1000Hz here; the 200Hz tone you&#8217;ve just heard isn&#8217;t really there. As I said, this is a well-known phenomenon in which non-linearities of the inner ear cause the difference tone to be heard. Wikipedia states (although without citing a source) that presenting the two tones separately to each ear through headphones also creates the effect, but I was unable to experience it that way.</p>
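<p>If you want to see how a non-linearity manufactures a difference tone that really is in the signal (as opposed to one made up inside your head), here is a small Python sketch. It passes the 1000Hz + 1200Hz mixture through a memoryless quadratic (a crude stand-in for the ear&#8217;s compressive transfer curve, not a physiological model) and measures the 200Hz component before and after:</p>

```python
import cmath
import math

fs, n = 8000, 8000                       # 1 second of samples at 8 kHz
f1, f2 = 1000.0, 1200.0                  # the two "real" tones

def amp_at(signal, freq):
    # Amplitude of the sinusoidal component at `freq` (exact-bin DFT correlation)
    acc = sum(s * cmath.exp(-2j * math.pi * freq * i / fs)
              for i, s in enumerate(signal))
    return 2.0 * abs(acc) / n

t = [i / fs for i in range(n)]
x = [math.sin(2 * math.pi * f1 * ti) + math.sin(2 * math.pi * f2 * ti) for ti in t]
y = [xi + 0.5 * xi * xi for xi in x]     # memoryless quadratic distortion

print(round(amp_at(x, 200.0), 3))        # prints 0.0: the linear mix has no 200Hz at all
print(round(amp_at(y, 200.0), 3))        # prints 0.5: distortion creates the difference tone
```

<p>The squared term expands the product of the two sines into components at f2 - f1 and f1 + f2, which is exactly where the difference and summation tones live.</p>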
<p>Here comes the fun part of this post: you can play several tricks with pure tones and use this ghost tone in creative ways. Here is a glissando of two pure sine tones. One oscillator starts at 4000Hz and goes down to 1000Hz; the other starts at 4200Hz and ends at 1200Hz. The difference of their frequencies is therefore always 200Hz, so you should hear a constant (though sometimes fluctuating) 200Hz tone while the high frequencies do their gliss:</p>
<div class="highlight"><pre><code class="language-javascript" data-lang="javascript"><span class="p">(</span>
<span class="p">{</span>
<span class="kd">var</span> <span class="nx">freqSweep</span> <span class="o">=</span> <span class="nx">Line</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="mi">4000</span><span class="p">,</span> <span class="mi">1000</span><span class="p">,</span> <span class="mi">10</span><span class="p">);</span>
<span class="nx">SinOsc</span><span class="p">.</span><span class="nx">ar</span><span class="p">([</span><span class="nx">freqSweep</span><span class="p">,</span> <span class="nx">freqSweep</span> <span class="o">+</span> <span class="mi">200</span><span class="p">]).</span><span class="nx">mean</span><span class="o">!</span><span class="mi">2</span><span class="p">;</span>
<span class="p">}.</span><span class="nx">play</span>
<span class="p">)</span></code></pre></div>
<div class="audio-player"><audio src="http://www.earslap.com/assets/combitones/combiex3.mp3" controls="controls"><p>Sorry, your browser probably doesn't recognise the audio tag somehow...</p></audio></div>
<p>Look at the spectral decomposition: again, the 200Hz tone isn&#8217;t there:</p>
<p><a href="http://www.earslap.com/assets/combitones/2.png"><img src="http://www.earslap.com/assets/combitones/2.png" alt="" /></a></p>
<p>You can even go to extremes! Here, the frequencies of the sine tones change randomly 8 times a second, but the 200Hz difference is preserved, so you can still hear the constant 200Hz tone (not very pleasant to listen to, but hey, it&#8217;s a cool effect!):</p>
<div class="highlight"><pre><code class="language-javascript" data-lang="javascript"><span class="p">(</span>
<span class="p">{</span>
<span class="kd">var</span> <span class="nx">baseFreq</span> <span class="o">=</span> <span class="nx">TRand</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="mi">1000</span><span class="p">,</span> <span class="mi">2000</span><span class="p">,</span> <span class="nx">Impulse</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="mi">8</span><span class="p">)).</span><span class="nx">lag</span><span class="p">(</span><span class="mf">0.01</span><span class="p">);</span>
<span class="nx">SinOsc</span><span class="p">.</span><span class="nx">ar</span><span class="p">([</span><span class="nx">baseFreq</span><span class="p">,</span> <span class="nx">baseFreq</span> <span class="o">+</span> <span class="mi">200</span><span class="p">]).</span><span class="nx">mean</span><span class="o">!</span><span class="mi">2</span><span class="p">;</span>
<span class="p">}.</span><span class="nx">play</span>
<span class="p">)</span></code></pre></div>
<div class="audio-player"><audio src="http://www.earslap.com/assets/combitones/combiex4.mp3" controls="controls"><p>Sorry, your browser probably doesn't recognise the audio tag somehow...</p></audio></div>
<p>Here they change randomly 32 times a second, still preserving the 200Hz separation. This one is harder to hear and fluctuates a bit, but the effect is there:</p>
<div class="highlight"><pre><code class="language-javascript" data-lang="javascript"><span class="p">(</span>
<span class="p">{</span><span class="c1">//best through headphones</span>
<span class="kd">var</span> <span class="nx">baseFreq</span> <span class="o">=</span> <span class="nx">TRand</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="mi">1000</span><span class="p">,</span> <span class="mi">2000</span><span class="p">,</span> <span class="nx">Impulse</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="mi">32</span><span class="p">)).</span><span class="nx">lag</span><span class="p">(</span><span class="mf">0.01</span><span class="p">);</span>
<span class="nx">SinOsc</span><span class="p">.</span><span class="nx">ar</span><span class="p">([</span><span class="nx">baseFreq</span><span class="p">,</span> <span class="nx">baseFreq</span> <span class="o">+</span> <span class="mi">200</span><span class="p">]).</span><span class="nx">mean</span><span class="o">!</span><span class="mi">2</span><span class="p">;</span>
<span class="p">}.</span><span class="nx">play</span>
<span class="p">)</span></code></pre></div>
<div class="audio-player"><audio src="http://www.earslap.com/assets/combitones/combiex5.mp3" controls="controls"><p>Sorry, your browser probably doesn't recognise the audio tag somehow...</p></audio></div>
<p>And here is the grand finale of combination tones for this post. In the previous examples we always kept a 200Hz separation between the two oscillators and heard a constant difference tone. What if we craft the separation so that the difference frequency plays a melody while the upper frequencies change randomly, even 32 times a second? Can you hear the &#8220;bottom&#8221; melody here? Any guesses what it might be?</p>
<div class="highlight"><pre><code class="language-javascript" data-lang="javascript"><span class="p">(</span>
<span class="p">{</span>
<span class="kd">var</span> <span class="nx">times</span> <span class="o">=</span> <span class="nx">Dseq</span><span class="p">(((</span><span class="mi">1</span><span class="o">!</span><span class="mi">12</span> <span class="o">++</span> <span class="p">[</span><span class="mf">1.5</span><span class="p">,</span> <span class="mf">0.5</span><span class="p">,</span> <span class="mi">2</span><span class="p">])</span><span class="o">!</span><span class="mi">2</span><span class="p">).</span><span class="nx">flat</span> <span class="o">/</span> <span class="mi">2</span><span class="p">,</span> <span class="mi">1</span><span class="p">);</span>
<span class="kd">var</span> <span class="nx">pitchBase</span> <span class="o">=</span> <span class="p">[</span><span class="mi">55</span><span class="p">,</span> <span class="mi">55</span><span class="p">,</span> <span class="mi">56</span><span class="p">,</span> <span class="mi">58</span><span class="p">,</span> <span class="mi">58</span><span class="p">,</span> <span class="mi">56</span><span class="p">,</span> <span class="mi">55</span><span class="p">,</span> <span class="mi">53</span><span class="p">,</span> <span class="mi">51</span><span class="p">,</span> <span class="mi">51</span><span class="p">,</span> <span class="mi">53</span><span class="p">,</span> <span class="mi">55</span><span class="p">];</span>
<span class="kd">var</span> <span class="nx">pitches</span> <span class="o">=</span> <span class="nx">Dseq</span><span class="p">((</span><span class="nx">pitchBase</span> <span class="o">++</span> <span class="p">[</span><span class="mi">55</span><span class="p">,</span> <span class="mi">53</span><span class="p">,</span> <span class="mi">53</span><span class="p">]</span> <span class="o">++</span> <span class="nx">pitchBase</span> <span class="o">++</span> <span class="p">[</span><span class="mi">53</span><span class="p">,</span> <span class="mi">51</span><span class="p">,</span> <span class="mi">51</span><span class="p">]).</span><span class="nx">midicps</span><span class="p">,</span> <span class="mi">1</span><span class="p">);</span>
<span class="kd">var</span> <span class="nx">freqs</span> <span class="o">=</span> <span class="nx">Duty</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="nx">times</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="nx">pitches</span><span class="p">,</span> <span class="nx">doneAction</span><span class="o">:</span> <span class="mi">2</span><span class="p">);</span>
<span class="kd">var</span> <span class="nx">baseRandFreq</span> <span class="o">=</span> <span class="nx">TRand</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="mi">1000</span><span class="p">,</span> <span class="mi">2000</span><span class="p">,</span> <span class="nx">Impulse</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="mi">32</span><span class="p">)).</span><span class="nx">lag</span><span class="p">(</span><span class="mf">0.01</span><span class="p">);</span>
<span class="nx">SinOsc</span><span class="p">.</span><span class="nx">ar</span><span class="p">([</span><span class="nx">baseRandFreq</span><span class="p">,</span> <span class="nx">baseRandFreq</span> <span class="o">+</span> <span class="nx">freqs</span><span class="p">]).</span><span class="nx">mean</span><span class="o">!</span><span class="mi">2</span><span class="p">;</span>
<span class="p">}.</span><span class="nx">play</span><span class="p">;</span>
<span class="p">)</span></code></pre></div>
<div class="audio-player"><audio src="http://www.earslap.com/assets/combitones/combiex6.mp3" controls="controls"><p>Sorry, your browser probably doesn't recognise the audio tag somehow...</p></audio></div>
<p>Not the most beautiful thing to listen to, but try to hear the ghost melody at the bottom. Your ears and brain are playing tricks on you, and observing it is worth the suffering, in my opinion. Use the comment box if you can hear it and want to hazard a guess…</p>
<div class="message bb-attrib-box">
<p>This post’s social cover image is derivative work from <a href="http://www.flickr.com/photos/bdu/381460970/">this</a> <a href="http://creativecommons.org/licenses/by-sa/2.0/">CC-SA licensed</a> image by <a href="http://www.flickr.com/people/bdu/">bdu</a>.</p>
</div>
SCTweeting!
2009-04-10T04:01:01+03:00
http://www.earslap.com//article/sctweeting
<p>Here is a cool concept.</p>
<p>I think many of you are familiar with the <a href="http://www.twitter.com/">Twitter</a> service; even if you aren&#8217;t using it yet, you are probably aware that there are millions of people all around the globe tweeting frantically through this service (described by some as the SMS of <a href="http://en.wikipedia.org/wiki/Internets">the Internets</a>), as it&#8217;s the 3rd most used social networking site, standing right after <a href="http://www.facebook.com/">Facebook</a> and <a href="http://www.myspace.com/">MySpace</a>.</p>
<p>It’s a micro-blogging site where registered users post minimal status updates for the people following them to see. People stay up to date on their buddies, work-related news, the status of various projects and so on simply by following the accounts that provide the relevant information. Some use it to keep in touch with friends without much hassle in their busy day schedules; others follow the progress of projects that announce their updates through Twitter. I’m pretty sure there are many other uses; people are inventive.</p>
<p>A cool (if not the coolest) use, for me anyway, germinated from the <a href="http://supercollider.sourceforge.net/">SuperCollider</a> code snippets <a href="http://www.mcld.co.uk/">Dan Stowell</a> started to post on his own Twitter feed as status messages, a concept which later evolved into a SCTweeting collaboration when other SuperCollider users found the act of posting little code chunks (called “SCTwits”) that make cool sounds interesting. The fun comes from the limitation Twitter imposes: your posts cannot be longer than 140 characters! This limitation tickles the creative mind and shapes the creative process: one strives to find a balance between simplicity, humor, coolness and, of course, the subjective beauty of the end result, all within 140 characters.</p>
<p>SuperCollider is a very expressive language with its generic tool set and various syntax shortcuts, yet it is still surprising to see what people can fit into 140 characters of code. SCTweeting people typically follow each other on Twitter, and the experience is fun, inspiring and educational.</p>
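<p>To give a taste of those shortcuts, here is a small illustrative snippet of my own (not one of the SCTwits below, just an assumed example): <code>play</code> can be applied directly to a function, argument keywords can be dropped in favor of positional arguments, and the <code>!2</code> duplication operator expands a signal into a stereo pair. The two expressions below produce the same stereo sine tone:</p>
<div class="highlight"><pre><code class="language-javascript" data-lang="javascript">// verbose version: explicit function, keyword arguments, explicit output bus
{ Out.ar(0, SinOsc.ar(freq: 440, mul: 0.1) ! 2) }.play;

// tweet-style version: play{} shortcut, positional args,
// !2 duplicating the mono signal into a stereo pair
play{SinOsc.ar(440,0,0.1)!2}</code></pre></div>
<p>Shaving characters like this is exactly the game the 140-character limit forces you to play.</p>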
<p>Here is the link for the mailing-list topic where it all started:</p>
<p><a href="http://www.nabble.com/sctwitt-td22745438.html">http://www.nabble.com/sctwitt-td22745438.html</a></p>
<p>You can find the Twitter addresses of some of the contributors under that topic in the mailing list. And here is mine if you want to follow me:</p>
<p><a href="http://www.twitter.com/earslap">http://www.twitter.com/earslap</a></p>
<p>This is so fun and brain-tickling that I want to keep doing it forever (unless everyone else quits and I end up feeling silly posting there by myself without anyone following, of course). Here are my contributions so far, with the resulting sounds embedded in case you don’t have a machine with SuperCollider around, or in case you reached this page while searching for what SuperCollider is all about. This is an expressive language!</p>
<p>This one sounds like a bad trombone being tested by an incompetent player:</p>
<div class="highlight"><pre><code class="language-javascript" data-lang="javascript"><span class="p">{</span><span class="nx">a</span><span class="o">=</span><span class="nx">LocalIn</span><span class="p">.</span><span class="nx">ar</span><span class="p">;</span><span class="nx">LocalOut</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="nx">Mix</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="nx">x</span><span class="o">=</span><span class="nx">SinOsc</span><span class="p">.</span><span class="nx">ar</span><span class="p">((</span><span class="nx">Decay</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="nx">Impulse</span><span class="p">.</span><span class="nx">ar</span><span class="p">([</span><span class="mi">4</span><span class="p">,</span><span class="mf">4.005</span><span class="p">]),</span><span class="mi">1</span><span class="nx">e3</span><span class="o">*</span><span class="nx">a</span><span class="p">.</span><span class="nx">abs</span><span class="p">)</span><span class="o">*</span><span class="mi">50</span><span class="p">),</span><span class="nx">a</span><span class="p">).</span><span class="nx">distort</span><span class="p">));</span><span class="nx">x</span><span class="p">;}.</span><span class="nx">play</span><span class="p">;</span><span class="c1">//tryingharder_to_noavail</span></code></pre></div>
<div class="audio-player"><audio src="http://www.earslap.com/assets/sctweets/sctwit1.mp3" controls="controls"><p>Sorry, your browser probably doesn't recognise the audio tag somehow...</p></audio></div>
<p>This one is glitchy:</p>
<div class="highlight"><pre><code class="language-javascript" data-lang="javascript"><span class="p">{</span><span class="nx">f</span><span class="o">=</span><span class="nx">LocalIn</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="mi">2</span><span class="p">).</span><span class="nx">tanh</span><span class="p">;</span><span class="nx">k</span><span class="o">=</span><span class="nx">Latch</span><span class="p">.</span><span class="nx">kr</span><span class="p">(</span><span class="nx">f</span><span class="p">[</span><span class="mi">0</span><span class="p">].</span><span class="nx">abs</span><span class="p">,</span><span class="nx">Impulse</span><span class="p">.</span><span class="nx">kr</span><span class="p">(</span><span class="mf">0.5</span><span class="p">));</span><span class="nx">LocalOut</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="nx">f</span><span class="o">+</span><span class="nx">AllpassN</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="nx">Pulse</span><span class="p">.</span><span class="nx">ar</span><span class="p">([</span><span class="mi">2</span><span class="p">,</span><span class="mi">3</span><span class="p">],</span><span class="nx">k</span><span class="o">*</span><span class="mf">0.01</span><span class="o">+</span><span class="mi">1</span><span class="nx">e</span><span class="o">-</span><span class="mi">6</span><span class="p">,</span><span class="mf">0.9</span><span class="p">),</span><span class="mi">1</span><span class="p">,</span><span class="nx">k</span><span class="o">*</span><span class="mf">0.3</span><span class="p">,</span><span class="mi">100</span><span class="o">*</span><span class="nx">k</span><span class="p">));</span><span class="nx">f</span><span class="p">}.</span><span class="nx">play</span></code></pre></div>
<div class="audio-player"><audio src="http://www.earslap.com/assets/sctweets/sctwit2.mp3" controls="controls"><p>Sorry, your browser probably doesn't recognise the audio tag somehow...</p></audio></div>
<p>Switching scenes:</p>
<div class="highlight"><pre><code class="language-javascript" data-lang="javascript"><span class="nx">play</span><span class="p">{</span><span class="nx">f</span><span class="o">=</span><span class="nx">LocalIn</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="mi">2</span><span class="p">).</span><span class="nx">tanh</span><span class="p">;</span><span class="nx">k</span><span class="o">=</span><span class="nx">Latch</span><span class="p">.</span><span class="nx">kr</span><span class="p">(</span><span class="nx">f</span><span class="p">[</span><span class="mi">0</span><span class="p">].</span><span class="nx">abs</span><span class="p">,</span><span class="nx">Impulse</span><span class="p">.</span><span class="nx">kr</span><span class="p">(</span><span class="mi">1</span><span class="o">/</span><span class="mi">4</span><span class="p">));</span><span class="nx">LocalOut</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="nx">f</span><span class="o">+</span><span class="nx">CombC</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="nx">Blip</span><span class="p">.</span><span class="nx">ar</span><span class="p">([</span><span class="mi">4</span><span class="p">,</span><span class="mi">6</span><span class="p">],</span><span class="mi">100</span><span class="o">*</span><span class="nx">k</span><span class="o">+</span><span class="mi">50</span><span class="p">,</span><span class="mf">0.9</span><span class="p">),</span><span class="mi">1</span><span class="p">,</span><span class="nx">k</span><span class="o">*</span><span class="mf">0.3</span><span class="p">,</span><span class="mi">50</span><span class="o">*</span><span class="nx">f</span><span class="p">));</span><span class="nx">f</span><span class="p">}</span><span class="c1">//44.1kHz</span></code></pre></div>
<div class="audio-player"><audio src="http://www.earslap.com/assets/sctweets/sctwit3.mp3" controls="controls"><p>Sorry, your browser probably doesn't recognise the audio tag somehow...</p></audio></div>
<p>Abusing FFT buffers:</p>
<div class="highlight"><pre><code class="language-javascript" data-lang="javascript"><span class="nx">play</span><span class="p">{</span><span class="nx">f</span><span class="o">=</span><span class="p">{</span><span class="nx">LocalBuf</span><span class="p">(</span><span class="mi">512</span><span class="p">)};</span><span class="nx">r</span><span class="o">=</span><span class="p">{</span><span class="o">|</span><span class="nx">k</span><span class="p">,</span><span class="nx">m</span><span class="o">|</span><span class="nx">RecordBuf</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="nx">Pulse</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="mi">8</span><span class="p">,</span><span class="nx">m</span><span class="p">,</span><span class="mi">6</span><span class="nx">e3</span><span class="p">),</span><span class="nx">k</span><span class="p">)};</span><span class="nx">r</span><span class="p">.(</span><span class="nx">a</span><span class="o">=</span><span class="nx">f</span><span class="p">.(),</span><span class="mf">0.99</span><span class="p">);</span><span class="nx">r</span><span class="p">.(</span><span class="nx">b</span><span class="o">=</span><span class="nx">f</span><span class="p">.(),</span><span class="mf">0.99001</span><span class="p">);</span><span class="nx">Out</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span><span class="nx">IFFT</span><span class="p">([</span><span class="nx">a</span><span class="p">,</span><span class="nx">b</span><span class="p">]).</span><span class="nx">tanh</span><span class="p">)};</span><span class="c1">//44.1kHz:)</span></code></pre></div>
<div class="audio-player"><audio src="http://www.earslap.com/assets/sctweets/sctwit4.mp3" controls="controls"><p>Sorry, your browser probably doesn't recognise the audio tag somehow...</p></audio></div>
<p>Granular sampling (uses one of the infamous default sounds that ship with SuperCollider):</p>
<div class="highlight"><pre><code class="language-javascript" data-lang="javascript"><span class="nx">play</span><span class="p">{</span><span class="nx">t</span><span class="o">=</span><span class="nx">Impulse</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="mi">75</span><span class="p">);</span><span class="nx">Sweep</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="nx">t</span><span class="p">,</span><span class="mi">150</span><span class="p">).</span><span class="nx">fold</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span><span class="mi">1</span><span class="p">)</span><span class="o">*</span><span class="nx">PlayBuf</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span><span class="nx">Buffer</span><span class="p">.</span><span class="nx">read</span><span class="p">(</span><span class="nx">s</span><span class="p">,</span><span class="s2">"s*/*"</span><span class="p">.</span><span class="nx">pathMatch</span><span class="p">[</span><span class="mi">2</span><span class="p">]),</span><span class="mi">1</span><span class="p">,</span><span class="nx">t</span><span class="p">,</span><span class="nx">Demand</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="nx">t</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="nx">Dbrown</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span><span class="mi">2</span><span class="nx">e5</span><span class="p">,</span><span class="mi">2</span><span class="nx">e3</span><span class="p">,</span><span class="nx">inf</span><span class="p">)))</span><span class="o">!</span><span class="mi">2</span><span class="p">}</span></code></pre></div>
<div class="audio-player"><audio src="http://www.earslap.com/assets/sctweets/sctwit5.mp3" controls="controls"><p>Sorry, your browser probably doesn't recognise the audio tag somehow...</p></audio></div>
<p>Thirsty anyone? Water sound (to my ears):</p>
<div class="highlight"><pre><code class="language-javascript" data-lang="javascript"><span class="nx">play</span><span class="p">{</span><span class="nx">Mix</span><span class="p">({</span><span class="nx">a</span><span class="o">=</span><span class="nx">LFNoise1</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="mf">0.2</span><span class="p">.</span><span class="nx">rand</span><span class="p">);</span><span class="nx">DelayC</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="nx">BPF</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="nx">WhiteNoise</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="nx">Dust2</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="nx">a</span><span class="o">*</span><span class="nx">a</span><span class="o">*</span><span class="mi">4</span><span class="o">**</span><span class="mi">2</span><span class="p">).</span><span class="nx">lag</span><span class="p">(</span><span class="mi">8</span><span class="nx">e</span><span class="o">-</span><span class="mi">3</span><span class="p">)),</span><span class="mi">10</span><span class="nx">e3</span><span class="p">.</span><span class="nx">rand</span><span class="o">+</span><span class="mi">300</span><span class="p">,</span><span class="mf">0.09</span><span class="p">),</span><span class="mi">3</span><span class="p">,</span><span class="nx">a</span><span class="o">*</span><span class="mf">1.5</span><span class="o">+</span><span class="mf">1.5</span><span class="p">,</span><span class="mi">45</span><span class="p">)}</span><span class="o">!</span><span class="mi">80</span><span class="p">).</span><span class="nx">dup</span><span class="p">}</span></code></pre></div>
<div class="audio-player"><audio src="http://www.earslap.com/assets/sctweets/sctwit6.mp3" controls="controls"><p>Sorry, your browser probably doesn't recognise the audio tag somehow...</p></audio></div>
<p>Easy IDM:</p>
<div class="highlight"><pre><code class="language-javascript" data-lang="javascript"><span class="nx">play</span><span class="p">{</span><span class="nx">AllpassC</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="nx">SinOsc</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="mi">55</span><span class="p">).</span><span class="nx">tanh</span><span class="p">,</span><span class="mf">0.4</span><span class="p">,</span><span class="nx">TExpRand</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="mi">2</span><span class="nx">e</span><span class="o">-</span><span class="mi">4</span><span class="p">,</span><span class="mf">0.4</span><span class="p">,</span><span class="nx">Impulse</span><span class="p">.</span><span class="nx">ar</span><span class="p">(</span><span class="mi">8</span><span class="p">)).</span><span class="nx">round</span><span class="p">([</span><span class="mi">2</span><span class="nx">e</span><span class="o">-</span><span class="mi">3</span><span class="p">,</span><span class="mi">4</span><span class="nx">e</span><span class="o">-</span><span class="mi">3</span><span class="p">]),</span><span class="mi">2</span><span class="p">)};</span><span class="c1">// #supercollider with bass please...</span></code></pre></div>
<div class="audio-player"><audio src="http://www.earslap.com/assets/sctweets/sctwit7.mp3" controls="controls"><p>Sorry, your browser probably doesn't recognise the audio tag somehow...</p></audio></div>
<p>Please follow and contribute if you like monkeying around with SC!</p>