For this data portrait assignment, I decided to challenge the definition of what a portrait can be. I took one of my selfies from Max Dean’s course and created an Audio Portrait using the data found in the photo’s pixels. The photo is pictured below:
Just an average selfie of me in the gym, right? Well, I took this image and ran it through a program called AudioPaint. This software takes the information in an image’s pixels and translates it into sound.
“A picture is actually processed as a big frequency / time grid. Each line of the picture is an oscillator, and the taller the picture is, the higher the frequency resolution is. While the vertical position of a pixel determines its frequency, its horizontal position corresponds to its time offset.
By default, the color of a pixel is used to determine its pan, the red and green components controlling the amplitude of the left and right channels respectively (the brighter the color, the louder the sound), and the blue component is not used. The action of each component can be modified in the Routing section of the Audio Settings window. Starting with version 2.0, AudioPaint can also convert the color components into HSB values, and use hue, saturation and brightness instead of red, green and blue.” – http://www.nicolasfournel.com/audiopaint.htm
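The mapping described above — rows as oscillators, columns as time offsets, red and green driving the left and right channels — can be sketched in a few lines of Python. This is a simplified illustration of the same idea using NumPy additive synthesis, not AudioPaint’s actual code; the function name and parameters are my own.

```python
import numpy as np

def image_to_audio(pixels, duration=2.0, sample_rate=8000,
                   f_min=100.0, f_max=4000.0):
    """Sonify an RGB image roughly the way AudioPaint describes:
    each row is a sine oscillator (vertical position -> frequency),
    each column is a time offset, and the red/green channels set the
    left/right amplitudes. Simplified sketch, not AudioPaint itself."""
    height, width, _ = pixels.shape
    n_samples = int(duration * sample_rate)
    t = np.arange(n_samples) / sample_rate
    # Row 0 is the top of the image, so assign it the highest frequency.
    freqs = np.linspace(f_max, f_min, height)
    # For each audio sample, find which image column is active.
    cols = np.minimum((np.arange(n_samples) * width) // n_samples, width - 1)
    left = np.zeros(n_samples)
    right = np.zeros(n_samples)
    for row in range(height):
        osc = np.sin(2 * np.pi * freqs[row] * t)
        left += (pixels[row, cols, 0] / 255.0) * osc   # red -> left channel
        right += (pixels[row, cols, 1] / 255.0) * osc  # green -> right channel
    stereo = np.stack([left, right], axis=1)
    peak = np.abs(stereo).max()
    return stereo / peak if peak > 0 else stereo  # normalize to [-1, 1]

# Tiny synthetic "image": bright red top half, bright green bottom half,
# so the high frequencies land in the left channel and the lows in the right.
img = np.zeros((8, 16, 3), dtype=np.uint8)
img[:4, :, 0] = 255
img[4:, :, 1] = 255
audio = image_to_audio(img)
```

Writing `audio` out as a WAV file (e.g. with the standard `wave` module or `scipy.io.wavfile`) would give a listenable result.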
Running my selfie through this software produced the result below, which I am calling an Audio Portrait of myself.
http://clyp.it/l0131yfg.mp3 (click link to listen)
Hopefully this little experiment inspires more work with this type of software. I feel the idea could be pushed even further: people could create music solely by designing long strips of images and running them through programs like this to output rhythms and beats.