Chromatic Composer – Read the drawing

Since the beginning of October, when the idea of the Chromatic Composer started to become clearer, we decided to begin with the drawing-reading side of our project. The aim of this decision was to explore the possibilities of reading an image as well as mapping it to sound.

The first part was to research sound properties and to find software and libraries that could help us work with and analyse sound.

Our first thought was to work with Pure Data: first, because some of the previous applications we had found that use visuals to play sound were built with Pure Data, and second, because we had some experience with the program. After a few trials in which we established communication between Processing and Pure Data and played with sample synthesisers and loops, we found it crucial to first establish how we would read our image.
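As a rough illustration of that link (not our actual patch), Processing can talk to a Pure Data patch over the network with its built-in net library. A minimal sketch, assuming a Pd patch listening with [netreceive 3000]; the port and the message content are placeholder choices:

import processing.net.*;

Client pd;

void setup() {
  size(200, 200);
  // Connect to Pure Data running on the same machine
  pd = new Client(this, "127.0.0.1", 3000);
}

void draw() {
  // Send the mouse position as a FUDI message (semicolon-terminated),
  // which [netreceive] hands to the rest of the patch
  pd.write(mouseX + ";\n");
}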

IMAGE READING

Following the logic of the previous works we found online, we played with programs such as ImageJ, an open-source image processing program developed for biomedical analysis (RGB analysis, measurements, angles, edge detection, blob counting, …). We realised, however, that we didn't have enough information to make it communicate and work with live input, Arduino, Processing, …

[Screenshot: ImageJ]

We also tried reacTIVision (with the TUIO protocol), an object tracker with ready-made markers (fiducial or simple). It was really easy to use, but not enough for our idea of colour reading. Besides, the markers are already familiar to everyone and read like a language of their own.

[Screenshot: reacTIVision markers]

After the unlucky experience of trying these wrong paths, we went back to Processing to approach the image analysis from there. We started with the OpenCV library, but again the common applications (blob detection, thresholding, …) were not a solution for our colour-reading aim.

Then we realised that we could simplify the idea by working with RGB and analysing the image according to those three channels. It seemed to be a fast and reasonably stable way to read in real time.

We decided to go down the path of extracting the colour of pixels first, and then tracking colours.
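In Processing, splitting a pixel into its three channels is direct. A minimal sketch of the extraction step, assuming a still image (the file name is a placeholder):

PImage img;

void setup() {
  size(400, 400);
  img = loadImage("drawing.png");  // placeholder file name
}

void draw() {
  image(img, 0, 0);
  // Read the pixel under the mouse and split it into channels
  color c = img.get(mouseX, mouseY);
  float r = red(c);    // each channel ranges 0-255
  float g = green(c);
  float b = blue(c);
  println(r + " " + g + " " + b);
}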

The first tries worked with a single-colour reading on still images with extreme colour changes from pixel to pixel, to see how the reading behaved.

The following images show the change in the RED reading, from a black pixel to a red pixel.

[Screenshots: red-channel reading of a black pixel vs. a red pixel]

This type of reading yields one variable per colour, and its speed depends heavily on the size of the image. We decided to keep this reading because it allows our machine to read and work independently of any user interaction.
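A sketch of that autonomous reading, assuming the image is scanned one pixel per frame in row order (a simplification of our actual code):

PImage img;
int index = 0;

void setup() {
  size(400, 400);
  img = loadImage("drawing.png");  // placeholder file name
  img.loadPixels();
}

void draw() {
  // One pixel per frame: reading time grows with image size
  color c = img.pixels[index];
  float r = red(c);
  float g = green(c);
  float b = blue(c);
  // These three values are what gets mapped to sound
  println(r + " " + g + " " + b);
  index = (index + 1) % img.pixels.length;
}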

On the other hand, we also needed a type of reading that could enrich the outcome (sound) once the user interacts, one that could react to movement. For this aim, we started experimenting with colour tracking in live video. This reading by itself gives us six different variables, two per colour, corresponding to the position of that colour on the screen (x, y).

[Screenshot: colour tracking in live video]
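For the tracking we worked along the lines of the classic Processing approach: scan the frame for the pixel whose colour is closest to a target colour and take its position. A sketch for a single colour (pure red here); run once per colour, it yields the six variables:

import processing.video.*;

Capture video;
color target = color(255, 0, 0);  // tracking red in this example

void setup() {
  size(320, 240);
  video = new Capture(this, width, height);
  video.start();
}

void draw() {
  if (video.available()) video.read();
  image(video, 0, 0);
  video.loadPixels();

  // Find the pixel whose colour is closest to the target
  float closest = Float.MAX_VALUE;
  int trackX = 0, trackY = 0;
  for (int y = 0; y < video.height; y++) {
    for (int x = 0; x < video.width; x++) {
      color c = video.pixels[y * video.width + x];
      float d = dist(red(c), green(c), blue(c),
                     red(target), green(target), blue(target));
      if (d < closest) {
        closest = d;
        trackX = x;
        trackY = y;
      }
    }
  }
  // (trackX, trackY) are the two variables this colour contributes
  ellipse(trackX, trackY, 16, 16);
}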

AUDIO

With these readings we realised we had a lot of variables to play with, and decided to switch the audio software to SuperCollider, which gives much more freedom to work over the network, as well as to play many coded synthesisers with several variables in real time.

The first sound tries were simple sine-wave oscillators triggered by colour and amount (range and frequency). Each colour was mapped to a range of frequencies: blue sends a value for low frequencies (40–300 Hz), green for middle frequencies (301–1500 Hz) and red for high ones (1501–10000 Hz).
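The mapping itself is a linear scaling of each channel value into its band. A sketch of the Processing side using the oscP5 library, assuming SuperCollider is listening on its default language port (57120); the OSC address and the fixed test colour are placeholder choices:

import oscP5.*;
import netP5.*;

OscP5 osc;
NetAddress sc;

void setup() {
  osc = new OscP5(this, 12000);             // local listening port
  sc = new NetAddress("127.0.0.1", 57120);  // sclang default port
}

void draw() {
  sendFrequencies(200, 120, 40);  // a fixed test colour, once per frame
}

void sendFrequencies(float r, float g, float b) {
  // Blue -> low band, green -> middle band, red -> high band
  float lowFreq  = map(b, 0, 255, 40, 300);
  float midFreq  = map(g, 0, 255, 301, 1500);
  float highFreq = map(r, 0, 255, 1501, 10000);

  OscMessage msg = new OscMessage("/chromatic/freqs");  // example address
  msg.add(lowFreq);
  msg.add(midFreq);
  msg.add(highFreq);
  osc.send(msg, sc);
}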

Later sound prototypes included more varied and complex sounds: on one side, some attempts at harmonics depending on colour, and on the other, the use of positions to affect the reading speed, the sustain of notes, the characteristics of filters, …

In the end, the inclusion of so many different instruments (synths) and variables led to an unintelligible and soon annoying sound. In the later versions of the sound code we tried the instruments one by one and adapted them to the characteristics of the reading and mapping. For example, a flute-like sound is more enjoyable and melodic in mid-range frequencies and with a short sustain.

For the last code, we decided to reduce the number of synths to just three, one for each colour, and to use position to change the speed of reading.
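A sketch of that last idea, using the vertical mouse position as a stand-in for a tracked colour's y position; the range of speeds is an arbitrary choice here:

int index = 0;
int total = 160000;  // e.g. the pixel count of a 400x400 image

void setup() {
  size(400, 400);
}

void draw() {
  // Top of the screen = fast reading, bottom = slow (arbitrary range)
  int step = (int) map(mouseY, 0, height, 20, 1);
  index = (index + step) % total;
  println("reading position: " + index);
}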
