Computational Audio-visual Temporalities
Kynan Tan
“Polytemporal.05 (2016) is an audio-visual work that is performed live in a concert setting. When presented live, the work is loud and bass-heavy, with a bright, single-channel projection taking up a large proportion of one wall in a darkened room. The conjunction of sonic and visual here is hypnotic and entrancing, drawing perception towards the computational, temporal event.
The work utilises computer-generated sound created from a minimal set of sampled drum sounds (a clap and two bass drum samples) and a software FM synthesiser. This sound has been composed and organised using Ableton Live and Max4Live, making use of both non-real-time arrangement techniques and generative algorithmic components. The work plays off electronic dance music through the use of rhythmic patterns commonly found in its various genres. However, these patterns have been stretched out, looped over extended periods of time, and manipulated through changing tempo. The video component is produced by splitting the sound output into three distinct channels and sending these to a separate program written in C++/openFrameworks. This program transduces the audio buffers into the visual domain as colour. For each channel, it takes 512 samples of incoming audio data and interprets them as a 512×1 array of pixel data, mapping each sample from its normal range of -1.0 to 1.0 to an eight-bit integer colour value in the range 0–255. This 512×1 array is then stretched to occupy the entirety of the screen, with the 512 data points spread across the pixels of the horizontal axis and each point copied down a vertical line of pixels until the screen space is filled. This results in vertical bars whose colour intensity corresponds, through transductive algorithmic processes, to the amplitude of the audio signal. These transductive processes produce a visual output that shares qualities with the data structures of digital sound, as the temporality of the sound buffer is given algorithmic priority, rather than adding detail within a 2D or 3D visual space.”
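The buffer-to-pixel transduction described above can be sketched in openFrameworks. The following is a minimal, hypothetical C++ sketch rather than the artist's own code: it assumes a single incoming audio channel delivered through a standard sound-stream input (the work itself routes three channels into the visuals), and the class and variable names are illustrative.

#include "ofMain.h"

// Hypothetical single-channel sketch of the 512-sample buffer-to-pixel mapping.
class ofApp : public ofBaseApp {
public:
    static const int kBufferSize = 512;   // samples per audio buffer, as described
    ofSoundStream soundStream;
    ofImage strip;                        // 512x1 greyscale strip of colour values

    void setup() {
        strip.allocate(kBufferSize, 1, OF_IMAGE_GRAYSCALE);
        strip.getTexture().setTextureMinMagFilter(GL_NEAREST, GL_NEAREST); // hard-edged bars

        ofSoundStreamSettings settings;
        settings.setInListener(this);     // route incoming audio to audioIn()
        settings.sampleRate = 44100;
        settings.bufferSize = kBufferSize;
        settings.numInputChannels = 1;
        settings.numOutputChannels = 0;
        soundStream.setup(settings);
    }

    // Map each incoming sample from its -1.0..1.0 range to an 8-bit value (0..255).
    void audioIn(ofSoundBuffer &input) {
        ofPixels &pix = strip.getPixels();
        for (size_t i = 0; i < input.getNumFrames() && i < (size_t)kBufferSize; i++) {
            pix[i] = (unsigned char) ofMap(input.getSample(i, 0), -1.0f, 1.0f, 0, 255, true);
        }
        strip.update();                   // upload the new pixel data to the texture
    }

    void draw() {
        ofBackground(0);
        // Stretch the 512x1 strip over the whole window: each sample becomes a
        // vertical bar whose intensity follows the audio amplitude.
        strip.draw(0, 0, ofGetWidth(), ofGetHeight());
    }
};

int main() {
    ofSetupOpenGL(1280, 720, OF_WINDOW);
    ofRunApp(new ofApp());
}

Drawing the 512×1 image at window size reproduces the vertical-bar effect directly: the horizontal axis carries the 512 data points, and the GPU's scaling repeats each point down its column, so no explicit per-pixel copy is needed.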
Kynan Tan is an artist interested in networks, relationality and digital systems of control, exploring these areas through artworks that are themselves multi-sensory relational structures. These works engage with digital aesthetics, code and data, taking form as multi-screen audiovisual performances and installations, 3D-printed sculptures, sound, and kinetic artworks of electronic circuits, speakers and lights.
Kynan has been the recipient of a DCA Young People and the Arts Fellowship (2013), Australia Council Artstart grant (2013), and participated in the JUMP Mentorship Program (2012), studying with audiovisual artist Robin Fox. Kynan has performed in Japan, Germany, New Zealand and throughout Australia, including events such as Test Tone (Tokyo), Channels Video Art Festival (Melbourne) and the NOW now Festival of Art (Sydney). His installation works have also been exhibited at MOCA (Taipei, Taiwan), NH7 Festival (Pune, India), First Draft (Sydney), Fremantle Arts Centre (Perth) and the Perth Institute of Contemporary Art. Kynan is currently a PhD candidate at UNSW Art & Design.