Make Some Noise

Monday 23rd October 2017

Day 1

This is the third project of second year, and it incorporates sound within Processing, similar to the direction I took in my previous project. Today was about introducing us to the idea of coding sound, Processing’s sound commands, and what deliverables we’ll need. This project is only one week long, so the deadline is on Friday.

Make Some Noise Overview

This brief project is an introduction to the implementation of sound within Processing. Students are expected to explore the sound capabilities of the software to produce a sound work. This can be either generative (changes over time on its own) or interactive (dependent upon user interaction), or perhaps a mix of both.

The emphasis here is on sound, so graphics are not important. This is not to be a sound visualiser – rather a sound generator. Responses could range from interactive mixers to generative instruments. A possible pathway for this project is to implement sound within an existing visual Processing sketch which you have previously created – to enhance and augment user experience.

Work Mode

Processing or Max MSP.

A final sketch created in Processing.

Learning Outcomes

  • Design and build a simple interactive digital artefact using routine computational techniques and practices
  • Apply routine conventions of high-level languages: functions, codeblocks, variables, objects and classes, conditions, logic, to create an interactive digital piece
  • Apply routine conventions of interaction design


We had a look through some examples that are in the Processing example set:

  • Analysis analyses incoming sound – leave this for now and possibly use it for another project
  • Keyboard uses the laptop keyboard as an input – the example generates sound when different keys on the keyboard are pressed
  • Brown, pink and white noise are all different

Audacity is free and available for all platforms; otherwise use Adobe Audition.

Some properties of sound:

  • Sample rate: 44100 Hz
    – CD quality
    – I wonder what would happen if it was incredibly low
  • Channel:
    – Mono (use this to ‘pan’ in Processing)
    – Stereo
  • Bit Depth:
    – 16-bit
Sample Example

The pre-existing example in Processing of loading in a sound file.

.aiff = Audio Interchange File Format
– Apple’s audio format
– Probably stick to .wav

– Add a pause between triggers so the sound doesn’t overlap
– Randomise the rate of the sound clip

Look at the Sound section of the Processing reference page – particularly the effects.


After the tutorial with our tutor I stayed in the studio experimenting with some of the examples that Processing had to offer. I’ve not experimented with the Sound library much, as I only discovered its existence during the last project.

Java Examples

I mixed the SineWave sketch with the Keyboard sketch and attempted to recreate a piano keyboard using the laptop keyboard, with frequencies taken from a piano. The bottom row of the laptop keyboard plays a C chord starting from middle C. I found that piano key numbers don’t translate directly into frequencies that can be used with the sine oscillator.
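The conversion I needed is the standard equal-temperament formula, which maps piano key number n (with A4 = key 49 = 440 Hz) to a frequency. A minimal Java sketch of it – the class and method names are my own, not from the Processing example:

```java
public class PianoFrequency {
    // Equal-temperament tuning: f = 440 * 2^((n - 49) / 12),
    // where n is the piano key number and key 49 is A4 = 440 Hz.
    static double keyToFreq(int keyNumber) {
        return 440.0 * Math.pow(2.0, (keyNumber - 49) / 12.0);
    }

    public static void main(String[] args) {
        System.out.println(keyToFreq(40)); // middle C (C4), about 261.63 Hz
        System.out.println(keyToFreq(49)); // A4 = 440 Hz
    }
}
```

Each frequency can then be handed straight to the oscillator’s .freq() setter when the matching laptop key is pressed.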

What it says at the top of the sketch:

“This is a sine-wave oscillator. The method .play() starts the oscillator. There are several setters like .amp(), .freq(), .pan() and .add(). If you want to set all of them at the same time use .set(float freq, float amp, float add, float pan)”


The frequencies in the Processing example only go from 20 Hz to 1000 Hz, so that restricts the notes that I’m able to put in.

Tuesday 24th October 2017

Day 2


Today we went through some more examples. Paul wanted to show us how to initiate a sound depending on the location of the mouse cursor. We started by creating two sides of a sketch and printing either “LEFT!” or “RIGHT!” depending on the corresponding location of the cursor.
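The logic boils down to one comparison against half the sketch width. Here it is as a tiny Java function (the names are my own, not the actual sketch’s):

```java
public class Sides {
    // Return "LEFT!" or "RIGHT!" depending on which half of the
    // sketch the cursor is in, comparing mouseX against width / 2.
    static String side(int mouseX, int width) {
        return (mouseX < width / 2) ? "LEFT!" : "RIGHT!";
    }
}
```

In the real sketch this check lives in draw(), with mouseX and width supplied by Processing.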

I forgot that the cursor doesn’t show when you do a screenshot. Sorry about that.


We then developed this to change the colour of each side of the sketch when the cursor crosses over to that side. It’s easier to draw one box with a stroke to form the middle line, and then draw two boxes on top of it.


After that the natural development for this project was to incorporate sound into the code as well.

Unfortunately you can’t hear the sound either – also sorry about that.


We then decided to map different sounds to keys on the keyboard. We did this using an array so we could add numerous sounds, instead of introducing each sound independently, which could take a lot of time if there were a substantial number of sounds.
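The array approach can be sketched like this – in Processing the entries would be SoundFile objects loaded in setup(), but the indexing idea is the same. The filenames here are hypothetical placeholders, not the files we actually used:

```java
public class KeyToSound {
    // Hypothetical sound filenames held in an array; in Processing
    // these would be SoundFile objects created in setup().
    static String[] files = { "kick.wav", "snare.wav", "hat.wav", "clap.wav" };

    // Map the keys '1'..'4' to indices 0..3; return null for any
    // key that has no sound assigned.
    static String fileForKey(char key) {
        int index = key - '1';
        if (index < 0 || index >= files.length) return null;
        return files[index];
    }
}
```

Adding a new sound is then just one more array entry, rather than another block of near-identical code.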


We then went on to proximity sensing and initiating a sound when the cursor falls over a certain object.
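The proximity test is just a distance check between the cursor and the object’s centre – a minimal Java sketch of the idea (names are my own):

```java
public class Proximity {
    // A sound is triggered when the cursor falls within `radius`
    // pixels of the object's centre (cx, cy), using the standard
    // Euclidean distance.
    static boolean over(float mx, float my, float cx, float cy, float radius) {
        float dx = mx - cx;
        float dy = my - cy;
        return Math.sqrt(dx * dx + dy * dy) < radius;
    }
}
```

In Processing this would sit in draw(), playing the sound the first frame the check flips from false to true.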

Thursday 26th October 2017

Day 3

This was the day that my project came together.

Thought Process

Because this was only a one-week project I didn’t have a large amount of time to thoroughly think through a concept and a way to follow it through, so I began to think of something simple. The methodology of Processing + sound, as stated by the brief, was already sorted; I just needed to decide what input I’d like and what sound I’d like as an output.

Even though I’d previously been experimenting with the mouse as an input, I decided to use a distance sensor that I’d acquired earlier in the year, because I am interested in using input sensors with the Arduino board and I wanted to see if I could effectively manipulate attributes of sound with distance. There were many attributes of sound that I had experimented with over the past few days and could have affected, such as pan, frequency and pitch, but I decided to go with volume. My rough idea was to make some sort of soundscape that a user can ‘walk through’. For example, between say 10cm and 40cm the sound of birds fades in, and between 40cm and 60cm some wind would be heard. I thought this would give the audio a three-dimensional feel which would be very immersive. So although my distance sensor would simply be controlling volume, that wouldn’t be the main focus of the experience – it wouldn’t be what the user was drawn to. Hopefully they’d be interested in what sounds would be heard depending on where they walked.

Although I wouldn’t be controlling pan with the distance sensor I thought I’d still be able to include it in my project. I’d be able to give the effect of the birds being to the left of the user, or the wind coming in from the right. This would further add to the three-dimensional theme I was aiming for.

A cool effect to achieve would be ‘pinned noises’. I’ve invented that term myself just now; I’m sure a proper one exists, but I need to find out what it is. What I mean by pinned noises is that if you heard a sound to the left and turned your head to the left (which you would naturally do to find the source of the sound), the sound would then be in front of you. This would mean I’d need to find a way to pin the sounds in three-dimensional space, but unfortunately I don’t have the knowledge or time to do so for this project. This means that the user will have to walk in a straight line towards the sensor with their head pointing forwards at all times, otherwise they’ll be taken out of the realism of it. It will also have to be a single user at a time, because multiple bodies will confuse the distance sensor and therefore the realism of the experience.

The hardware for the output is also a factor that will heavily influence my project. It will have to be either:

  • Speakers
  • Headphones

Using speakers would avoid ruining the realism when you turn your head, but all of the sound would be coming from the direction of the speakers. Therefore I’d need to set up surround sound, which I unfortunately don’t have as a poor student. Headphones would create a really immersive experience, but then the realism would be ruined when you turn your head, as stated above. I’ll just have to make sure I get some headphones that have a long enough wire or, better yet, wireless headphones.

Distance Sensor

Below is the Parallax Ping))) distance sensor that I bought:

It works by having one cylinder send out a sound wave whose frequency is so high it is undetectable to the human ear. This sound wave bounces off an object and rebounds back into the other cylinder. The time is recorded from when the sound wave was sent out to when the rebound returned. Then in the Arduino software I will write some code that works out the distance using distance = speed × time. The speed of sound is 343 m/s; the Ping))) sensor tells you the time it took for the sound wave to return, and from that you can calculate the distance in either inches or centimetres, whichever works easier. I used centimetres because that’s the measurement I feel best working with. You then take these numbers over to Processing, where you can set boundaries with these measurements, as I stated above, to bring in different sounds.
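In plain numbers: 343 m/s is 0.0343 cm per microsecond, and the echo time covers the distance twice (out and back), so you halve the result. A sketch of that conversion, written in Java for illustration rather than as my actual Arduino code:

```java
public class Ping {
    // The Ping))) reports the round-trip echo time in microseconds.
    // Sound travels at roughly 343 m/s = 0.0343 cm/us, and the pulse
    // covers the distance twice, so halve the product.
    static double echoToCm(long echoMicros) {
        return (echoMicros * 0.0343) / 2.0;
    }
}
```

So an echo time of about 5831 microseconds corresponds to an object roughly 100 cm away.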

The Arduino sketch that I actually ended up with that worked:


One problem with the distance sensor is that there are occasional anomalies where it wouldn’t detect an object and would send back a distance that measured the other side of the room. You don’t get this problem when using a mouse, as it always sends a valid input. These anomalies obviously cause a problem, because if your set boundaries only cover a range of 1 metre and the Arduino suddenly detects a distance of 3 metres, it messes with the output – in my case, the volume.
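One simple way to tame these anomalies is to ignore any reading outside the expected working range and hold on to the last valid one instead – a sketch of that idea (not necessarily what I implemented):

```java
public class Filter {
    static double lastGood = 0;

    // Accept a new reading only if it falls inside [min, max];
    // otherwise reuse the previous valid distance, so a spurious
    // 300 cm spike can't jolt the volume.
    static double validate(double cm, double min, double max) {
        if (cm >= min && cm <= max) {
            lastGood = cm;
        }
        return lastGood;
    }
}
```

A smoothing average over the last few readings would be another option, at the cost of a slight lag.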


Janet Cardiff and George Bures Miller’s 2012 sound installation piece, FOREST (for a thousand years…), incorporates the actual forest into an audio composition played from more than 30 speakers spread around the seating area.

“On a sunny day you hear the rustling breeze, but also the recording of a dramatically escalating wind that sounds intensely real. You sonically register that a storm is approaching, even though your eyes tell you otherwise; when you hear a branch loudly snap overhead (in the recording), you become instantly fearful and flinch. […] There are the sounds of war: whistling screeches, big explosions, the rat-a-tat of machine gun fire. There is a brief but shocking scream, a crashing tree, sounds of a mother and child, clanging metal. Singers come close, but then leave. You hear the trees and the wind again, and the crickets and birds.”

Janet Cardiff and George Bures Miller, FOREST (for a thousand years…), 2012 [Online] Available at:

I like that this sound installation is a site-specific piece of work and that you’re fully immersed as all of the speakers in the trees surround you. I also find it interesting in the fact that it’s not music – it’s audio.

Another piece by Janet Cardiff and George Bures Miller, STORM ROOM (2009), transforms a normal room to one that is encapsulated by a storm.

“This piece shows that it is not safe even under a roof. Lightning and shadows of trees surround the windows. It shows you things normally not visible, creating a storm that can really be felt.

A computer controls the flow of water, the lights, the strobes, and the fans, etc. An ambisonic sound track plays through 8 hidden speakers and 2 hidden subwoofers. The piece begins as the storm approaches, with no water hitting the windows, then proceeds to the incredibly loud, floor shaking climax.”

Janet Cardiff and George Bures Miller, STORM ROOM (2009), [Online] Available at:

I like how they use sound to completely transform the atmosphere of a room. Although your eyes are telling you one thing, Cardiff and Miller have created effects that trick your other senses into believing the unbelievable. I want to incorporate this idea into my project by transforming an atmosphere that the user normally experiences into one of my choosing.


There are many places that I could create my soundscape for such as:

A City:

  • Car engines
  • Brakes
  • Horns
  • Traffic lights
  • People talking
  • Footsteps on pavement
  • Shop doors opening/closing – bells

The Beach:

  • Waves
  • Seagulls
  • Children playing
  • Splashing
  • People out of breath
  • Walking through water


The Woods:

  • Leaves rustling
  • Wind, light breeze
  • Birds
  • Squirrels
  • Footsteps
  • Streams
  • Waterfalls
  • Branches breaking off
  • Twigs snapping
  • Animals foraging
  • Walking through mud
  • Weather – rain, storm, lightning, thunder

I thought I’d make my soundscape a walk through the woods, because I could think of the most sounds for this location. I also have an interest in exploring emotion and mental state, so I thought that by creating this soundscape I could create a feeling of tranquillity and peace. It would be a momentary escape from real life, especially the city where my studio is based. Below are some photographs I took on a recent visit to some woods up at Port Ban. This is the kind of image I’d like my users to experience.

It will be a visual created through sound.

Everyone’s experience will be different.

As for the Processing side, I had to download Krister’s sound library, which effectively allowed me to manipulate sound. It meant that I could create boundaries using IF statements, plus commands from the library that fade the sound in so it doesn’t just jump in. This adds to the realism, as it’s as if the user is approaching the source of the sound.

I ended up creating numerous sound channels and numerous boundaries for each sound. Doing it per individual sound meant that I could make some sounds overlap.
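The boundary-plus-fade idea looks roughly like this per sound: a linear fade across a distance band, silent below it and at full volume above it. The band edges here are illustrative, and this is plain Java rather than the actual library calls:

```java
public class Bands {
    // Volume for one sound as a function of distance: 0 below `start`,
    // 1 above `end`, and a linear ramp in between, so the sound fades
    // in smoothly instead of jumping in at the boundary.
    static float volumeFor(float cm, float start, float end) {
        if (cm <= start) return 0f;
        if (cm >= end) return 1f;
        return (cm - start) / (end - start);
    }
}
```

Giving each sound its own (start, end) pair is what lets bands overlap – birds can still be fading out while the stream is fading in.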

YouTube, Nature Sounds: the Forest [ Soundscape ], 2013 [Online] Available at:

This is a nice example of how sounds in the forest/woods can be used to portray the atmosphere of peace and tranquillity. I used this as inspiration of sounds that I could search for online.

I searched online for ages to find appropriate sounds that I liked and that I thought would create an effective soundscape for my project. In the end I found sounds for birds, footsteps over branches/twigs, and a stream.

Friday 27th October 2017

Day 4

After a long night of coding, my final result differed a lot from my initial thoughts above. I couldn’t get my £20+ Ping))) sensor working, so I had to use a cheaper one I bought off Amazon that had 4 pins instead of 3, which affected the coding a bit. The accuracy may have been affected by this too. Unfortunately, with all the anomalies that my distance sensor kept sending, it was affecting my volume too much, and so I ended up having to use smaller distances – specifically between 0cm and 60cm. I used my hand close up to the sensor to demonstrate it working in the video, as it was giving me more accurate results compared to a human body in a large room.

I also decided to change the concept a little, making it as if the user was walking away from sounds of discomfort towards sounds of tranquillity. I thought this still effectively portrayed the exploration of emotion that I’d originally wanted to include.

The final video of my test that I submitted on the university VLE.

In the future I’d like to get my distance sensor working so that it accurately detects the correct distance a body is from the sensor. If I get this working with larger distances, it means the user can properly walk some distance, with sounds coming in and leaving as they get closer (or further away?).

As for the sounds, I can maybe go out and record some of my own.

For a one week project I feel happy with my outcome and I feel like it is a good starting point for projects in the future.

See Living Walls

Previous: Control – Week 4 (or Control – Week 1)

Next: Design Domain

