I covered this application in a previous post (if you missed it, you can read it here), but that wasn't an opportunity to look at how the code in the sketch works, so I'll cover it in more detail here.
The import statement makes the SoundCipher library available to the Processing sketch. The asterisk indicates that all classes in the library are available for the program to call functions from.
Three instances of the SoundCipher class are then created; these are labeled sc, sc2 and sc3.
An array of floats is then used to set the available pitches. The root and dominant pitches are repeated several times in the list, which increases the likelihood of their selection when a value is chosen at random from the array.
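The repetition-as-weighting idea can be sketched in plain Java, outside Processing. The pitch values below are illustrative stand-ins, not the sketch's actual array:

```java
import java.util.Random;

public class PitchSet {
    // Illustrative C major set: the root (60) and dominant (67) are repeated,
    // so a uniform random index lands on them more often.
    static final float[] PITCH_SET = {60, 60, 60, 62, 64, 65, 67, 67, 69, 71};

    // Uniformly choose an index; the repeats in the array do the weighting.
    static float pick(Random rng) {
        return PITCH_SET[rng.nextInt(PITCH_SET.length)];
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        int roots = 0, trials = 10000;
        for (int i = 0; i < trials; i++) {
            if (pick(rng) == 60) roots++;
        }
        // 3 of the 10 entries are the root, so expect roughly 30% hits.
        System.out.println("root fraction: " + roots / (double) trials);
    }
}
```

No extra data structure is needed for the weighting; duplicating entries in the array is enough.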
Other variables are also declared before setup(). These include the length of the pitchSet array; the key, or root note offset, used to transpose the pitches; and the note density setting (effectively how many notes are played, and their spacing), which is changed to create sparser or more intense musical passages.
The setup() method sets the frame rate for draw() to 8 frames per second, which is great for music but not so brilliant for drawing functions. The JavaSound synthesizer is used to generate the instrument sounds that play the musical notes you hear while drawing to the screen. These correspond to the General MIDI specification; in this code sc3 is assigned sound 49, which is "String Ensemble". A complete list of the GM instruments is available at Wikipedia's General MIDI entry. The first two instances use the default instrument, the acoustic piano. The only remaining points are the size() declaration, which is 800×800 pixels, and the background(), which is set to 255, corresponding to white.
The next section of code really does all the work.
It is called 8 times a second, and each time it runs it may play a note on each of the three SoundCipher instances. The first part depends on whether a random number is less than the density value. If this condition is met, a note is played, a random fill colour is selected for the chosen shape, and, in this instance, a bezier curve is drawn. The parameter values are subject to randomness, providing variety and change in the shape, position and colour of the output drawn to the screen. Bezier curves produce beautiful shapes, but it is worth noting that a simpler primitive shape would probably render faster, with smoother results. The bezier uses several mouseX and mouseY references, meaning those values track the horizontal and vertical position of the mouse within the display area.
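The density gate can be illustrated in plain Java. The comparison style (a random draw in [0,1) against the density value) and the density figure are assumptions for demonstration, not the sketch's exact code:

```java
import java.util.Random;

public class DensityGate {
    // A note (and shape) fires when a uniform random draw in [0,1)
    // falls below the density value; higher density means more notes.
    static boolean shouldPlay(double density, Random rng) {
        return rng.nextDouble() < density;
    }

    public static void main(String[] args) {
        Random rng = new Random(7);
        int fired = 0, frames = 10000;
        double density = 0.25; // illustrative value
        for (int i = 0; i < frames; i++) {
            if (shouldPlay(density, rng)) fired++;
        }
        // Expect roughly 25% of frames to trigger a note and a shape.
        System.out.println("fired fraction: " + fired / (double) frames);
    }
}
```

At 8 frames per second, a density of 0.25 would average around two notes a second, which is how the single setting controls how sparse or intense the passage feels.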
Two mouseButton system variable checks are used to create the black and white ellipses, depending on whether the LEFT or RIGHT button is pressed. They produce white or black fill() values (255 for white, 0 for black), with a grey stroke() drawn around the ellipses as a border and a strokeWeight() of 0.5 to keep the border lines fine. The two types of ellipse are drawn to the screen at the mouse x and y co-ordinates whenever the LEFT or RIGHT button is pressed, with a random size between 0 and 80 pixels assigned to the float d.
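The button-to-fill mapping and the random diameter can be sketched in plain Java; the enum stands in for Processing's LEFT/RIGHT constants, and the method names here are illustrative:

```java
import java.util.Random;

public class EllipseFill {
    enum Button { LEFT, RIGHT } // stand-in for Processing's mouseButton values

    // LEFT draws white ellipses (fill 255), RIGHT draws black (fill 0).
    static int fillFor(Button b) {
        return b == Button.LEFT ? 255 : 0;
    }

    // Mirrors a random(80)-style call: a diameter between 0 and 80 pixels.
    static double diameter(Random rng) {
        return rng.nextDouble() * 80;
    }

    public static void main(String[] args) {
        System.out.println("LEFT  -> fill(" + fillFor(Button.LEFT) + ")");
        System.out.println("RIGHT -> fill(" + fillFor(Button.RIGHT) + ")");
        System.out.println("diameter: " + diameter(new Random(5)));
    }
}
```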
The second part executes every 32 frames. It changes the keyRoot variable so the music is transposed, alters the density to a new value, and plays a long, low pedal tone.
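The every-32-frames timing is a frame-count modulus; here is a plain-Java sketch of that pattern, where the modulus check and the one-octave range for the new root offset are assumptions for illustration:

```java
import java.util.Random;

public class Transposer {
    // Fires on every 32nd frame, mirroring a "frameCount % 32 == 0"
    // style check inside draw().
    static boolean isTransposeFrame(int frameCount) {
        return frameCount % 32 == 0;
    }

    // Pick a new root offset in semitones (illustrative range: one octave).
    static int newKeyRoot(Random rng) {
        return rng.nextInt(12);
    }

    public static void main(String[] args) {
        Random rng = new Random(3);
        for (int frame = 1; frame <= 96; frame++) {
            if (isTransposeFrame(frame)) {
                System.out.println("frame " + frame + ": keyRoot -> " + newKeyRoot(rng));
            }
        }
    }
}
```

At 8 frames per second, every 32 frames works out to a key change roughly every four seconds.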
The third part occurs every 16 frames and selects pitches for a chord, before playing it with a random dynamic. This is the part where sc3 uses the String Ensemble sound (number 49). The chord pitches are transposed down an octave (-12) from the pitch set, moving them into a lower register than the piano sound.
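The octave transposition is just a subtraction of 12 semitones from each MIDI pitch. A minimal plain-Java sketch, using an illustrative C major triad rather than the sketch's chosen pitches:

```java
public class ChordBuilder {
    // Transpose a set of MIDI pitches down an octave (12 semitones),
    // as the sketch does for the string-ensemble chord.
    static float[] octaveDown(float[] pitches) {
        float[] out = new float[pitches.length];
        for (int i = 0; i < pitches.length; i++) {
            out[i] = pitches[i] - 12;
        }
        return out;
    }

    public static void main(String[] args) {
        float[] chord = {60, 64, 67}; // illustrative C major triad
        for (float p : octaveDown(chord)) {
            System.out.println(p); // prints 48.0, 52.0, 55.0 on separate lines
        }
    }
}
```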
The last section of the code handles keyPressed(), which allows us to bind code to specific key presses; in this case 'w' and 's'. These let us wipe the display screen if the image becomes too cluttered or slows down and, more importantly, create the art print we want as evidence and a keepsake.
To ensure that the musical side of the sketch remains the focus, I have coded it to play an acoustic piano note whenever 'w' is pressed as well. Even if you wipe the screen to continue drawing afresh, you keep playing notes, so wiping the screen contributes musically rather than creating a pause.
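The key dispatch described above can be sketched in plain Java; the returned strings are just labels describing each action for demonstration, not the sketch's actual calls:

```java
public class KeyActions {
    // Dispatch mirroring the sketch's keyPressed() logic: 'w' wipes the
    // canvas and, per the post, also plays a piano note so the wipe
    // contributes musically; 's' saves the image as the art print.
    static String onKey(char key) {
        switch (key) {
            case 'w': return "wipe screen + play piano note";
            case 's': return "save image";
            default:  return "ignored";
        }
    }

    public static void main(String[] args) {
        System.out.println("'w' -> " + onKey('w'));
        System.out.println("'s' -> " + onKey('s'));
        System.out.println("'x' -> " + onKey('x'));
    }
}
```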
This application was originally coded with an interactive whiteboard, or "Smart" board, in mind as the main way of using it: for a small group in a classroom where there are 10–15 minutes available, in tutor time perhaps. If you look at the code you will see // marks followed by sections of text. These are comments, intended to explain what is happening in the code, and are ignored by the computer when executing it; hence the comment in line 86 intended for Mia's class, "For 6ML enjoy".
My colleague James Winchester (@jwinchester25) adapted this code to produce a switch version, and has done some excellent work in creating a new Kinect version based on the SoundCipher library that draws primitive shapes to the screen using closest-point tracking; you can check it out at kinectSEN.
The code for this sketch was based on Processing 1.5.1, which is the current stable release. A new version, Processing 2.0+, is due to be released very soon, and the links to the Processing reference documents now point to the new version. I haven't yet tested this code on Processing 2.0+, which at the time of writing is 2.0 Beta 6, so if you are interested in downloading the .pde file from processingSEN to run in Processing, please use the stable release 1.5.1 for the time being.
Thanks to Andrew R. Brown for the use of his tutorial sketch and library, further information about SoundCipher is available here.
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.
Further details of Creative Commons Licences and their use can be found here.