Code of Music

Sampling

Generative Atonal Guitars

Inspiration

In the list of helpful links that Luisa included on the course website, one goes to a free collection of samples from the London Philharmonia Orchestra. In browsing the links, I was stoked to find a large set of recorded guitar notes. I hoped to create some cool rhythmic strumming piece that sounds like one of the two following examples:

As I describe later, this turned out not to be possible with my current solution because the guitar samples vary dramatically in the timing of their recorded onsets. Instead, I set out to create an automated, ambient guitar generator.

Loading Audio

One of the first challenges was simply figuring out how to load all of the audio. If you go to the project’s audio folder, you’ll notice there are over 100 samples of guitar notes to use. The title of each sample clearly defines its sound through four parameters - pitch, octave, amplitude, and type - where type is either ‘normal’ or ‘harmonics’. This seems amazing, but one quickly notices that the collection is incomplete - all pitch classes are covered, but each pitch is only available in certain octaves, amplitudes, and types. This complicates both loading, since ideally one avoids typing out the whole list by hand, and playback, since not every note is available.

My first attempt was to use JavaScript error handling to catch the failed requests. I quickly found that a 404 response does not throw an exception, so my try/catch blocks did absolutely nothing. Then, puzzled, I tried loading the audio files, setting a variable to true once each had successfully loaded, and adding the successes to an array. But of course loading is asynchronous, so my code would move on before the variable ever changed to true. (Asynchrony is usually a good thing, because the user interface stays fully functional even while data sources are loading.) Finally, I came up with a solution that uses a synchronous function (below) to check whether a file even exists before attempting to load it. This worked, but had two limitations: first, it blocked the user interface, so there was noticeable lag after setup() got called; second, no matter what, it printed hundreds of errors to the console.

function urlExists(url) {
    var http = new XMLHttpRequest();
    http.open('HEAD', url, false); // synchronous request: blocks until the response arrives
    http.send();
    return http.status !== 404;
}

I slept on it and came up with a solution that solves both of those problems. It requires an extra step, but I think it is worth it. I created a simple Node script that reads the audio file directory, automatically generates a list of all the samples and their parameters, and stores the result in a JSON file. Then, once a user loads the webpage, that JSON file is retrieved with the JavaScript Fetch API, and all the instruments are loaded asynchronously so that the user experience is never affected.

Further Implementation

This project took a lot of time. It’s not as musical as some of the other projects, but I think its modularity and layout are pretty solid and could lay the foundation for something truly great. It is the culmination of my understanding of how to link the p5 and Tone libraries. An AI “RoboPlayer” creates a new random pitch set every time the user presses a button. (Beyond pitches, the “RoboPlayer” may also filter by how high the notes are, their volume, or whether they are played with harmonics.) Right now, the pitch sets are predominantly chromatic, but one direction to go in might be adding a bit of logic for how they are generated. The “RoboPlayer” triggers samples at a random tempo with limited probability, and from time to time it plays a whole cluster of pitches at once. Visual elements are generated from the panning and pitch of each sample, and their size is driven by the corresponding sample’s current amplitude.
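The random pitch-set idea can be sketched like this. This is a simplified stand-in for illustration, not the actual RoboPlayer code, and `randomPitchSet` is a name I am inventing here:

```javascript
// The twelve chromatic pitch classes ("s" standing in for sharp).
const CHROMATIC = ['C', 'Cs', 'D', 'Ds', 'E', 'F', 'Fs', 'G', 'Gs', 'A', 'As', 'B'];

// Draw `size` distinct pitch classes at random from the chromatic scale.
function randomPitchSet(size) {
  const pool = [...CHROMATIC];
  const set = [];
  while (set.length < size && pool.length > 0) {
    const i = Math.floor(Math.random() * pool.length);
    set.push(pool.splice(i, 1)[0]); // remove from the pool so each pitch appears once
  }
  return set;
}
```

Because the draw is uniform over all twelve pitch classes, the results are chromatic; weighting the draw toward, say, a diatonic scale would be one way to add the generation logic mentioned above.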

For color, I wanted randomness, but I did not want total randomness, nor did I want to predefine a set of colors. The solution I came up with is based on the p5 Pointillism example. I found an image with a color palette I liked, and every time a button is pressed, the color at a randomly chosen point in the image becomes the background color. In practice this works OK, but it would be nice to fade between multiple colors going forward.
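The core of the palette trick is tiny. Here `pixels` stands in for p5’s flat RGBA pixel array (what you get from `img.loadPixels()` followed by `img.pixels`); the function name is my own for this sketch:

```javascript
// Pick a random pixel from a flat RGBA array and return its color.
function randomPaletteColor(pixels) {
  const nPixels = pixels.length / 4;                // 4 channels (R, G, B, A) per pixel
  const i = 4 * Math.floor(Math.random() * nPixels); // index of a random pixel's red channel
  return { r: pixels[i], g: pixels[i + 1], b: pixels[i + 2] };
}
```

In a p5 sketch, the button handler would then do something like `const c = randomPaletteColor(img.pixels); background(c.r, c.g, c.b);` so every background color is guaranteed to come from the chosen palette image.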

Challenges

The main problem with this particular library is that the onset of many of these notes is not at the beginning of the recording. For example, in the A3 forte sample it takes over a full second for the note to be heard. This is clearly problematic for creating anything rhythmic, since the onset is totally unpredictable. It is also challenging for visualization - every file produces low-level noise while playing, which registers as amplitude and drives the visuals even before the actual note sounds.
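One possible fix, which I have not implemented, would be to scan each decoded sample for the first point that crosses a small amplitude threshold and start playback from there. A rough sketch, with `findOnset` and the threshold value being my own choices:

```javascript
// Return the time (in seconds) of the first point in a decoded sample
// whose amplitude exceeds a small threshold, to skip leading silence/noise.
function findOnset(samples, sampleRate, threshold = 0.02) {
  for (let i = 0; i < samples.length; i++) {
    if (Math.abs(samples[i]) > threshold) {
      return i / sampleRate; // onset time in seconds
    }
  }
  return 0; // never crossed the threshold; play from the start
}
```

With Tone.js, the resulting time could be passed as the playback offset (e.g. the second argument to `Tone.Player.start`), so every note begins roughly at its true onset regardless of how the file was recorded.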

Another thing that led to frustrating bugs early on: coming from Python, I did not realize that JavaScript does not support named parameters!
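The closest JavaScript equivalent is destructuring an options object, which also allows defaults. A toy example (the function and parameter names are just for illustration):

```javascript
// Simulate named parameters by destructuring a single options object.
function playNote({ pitch = 'A', octave = 3, amplitude = 'forte', type = 'normal' } = {}) {
  return `${pitch}${octave} ${amplitude} ${type}`;
}

// Callers can pass any subset of "named" arguments, in any order:
playNote({ octave: 4 });              // 'A4 forte normal'
playNote({ type: 'harmonics' });      // 'A3 forte harmonics'
```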

Future Ideas

Play it. (Supported by Chrome.)

View the source on GitHub.


A blog for displaying Willie Payne’s progress in the NYU ITP Course “The Code of Music.”