///////

infinite grain 09: sinan bökesoy

[image: sinan1x]

[infinite grain is a series of interviews inspired by microsound procedures, exploring a wide variety of topics in dialogue with artists who work with sound in installation, composition and improvisation]

Algorithms have a special magic, a mystery that lies in life itself, on which all code is ultimately based. Programming computers involves a series of transformation processes that can unfold naturally and reach profound territories, like sound, which thanks to algorithms can be decomposed, dissolved, sculpted, regrouped and structured in inexhaustible forms. If the flow of sound is already complex by nature, then once it is processed by machines able to control its various aspects, we have in our hands a way of discovering vast universes of unknown listening realities: an encounter with sound in its very material conception, and with that its atomic theory, in which this so-called “sonic matter” is studied, structured and manipulated as particles and groups of particles, revealing a granular scaffolding inside the sound object.

This microsonic meeting translates into a microcosmic encounter: all sounds appear to be made of grains or particles, in the same way that sound-emitting bodies are made of atoms, cells and so on. This sonic dimension, found in the recesses of time between milliseconds and seconds, covers the micro, sound-object and meso scales, where countless sonic interactions happen and can even be controlled and configured in particular ways.

This gives the artist the possibility of dealing with sound in a completely new way, one that is not limited to working with mere objects or events, but also with networks; because sounds relate to each other, and each sound is actually a microsonic universe, a network of grains. Giving the artist such a space leads them to control the interactions of microsounds happening everywhere. This is where the conception of a tool like Cosmosƒ enters the game: a tool created specifically for discovering the sounds that result from complex inter-scale relationships of sonic atoms, formulating the possibility of finding not just new microsounds, but new sonic compounds and universes as such, all resulting from the interaction of particle structures and objects at different scales, entangled in their own complexity.

Cosmosƒ applies microsonic processes to materials, both generated and recorded/sampled. It has deep modulation and processing engines, and of course its very own ways of synthesis. The structure of the signal flow is special, as it is intimately related to the fragmentation of sounds and the generation of new materials from their interaction at different time scales. The core of the engine is dominated by powerful stochastic processes that affect the whole cycling structure of the program, resulting in a myriad of ways of creating complex objects by modulating, multiplying, resampling, processing, relocating and reassembling these small sound corpuscles that roam alone or en masse around our ears.

[image: cosmosfvsaturn]

The whole result is achieved by a particular system that links the processes of signal transformation in such a way that sounds are structured not just according to how the amplitude and relationship of certain frequencies are modulated over time; in the whole system, time is a primary control, one that alters the scale and dimension of the frequency interactions. In that sense, we could say Cosmosƒ is an instrument that conceives the exploration of sound as based on the control of, and experimentation with, network-based sounds, in which several scales interact and deeply affect the sounds obtained.

This time control lets the microsonic process emerge from multiple control parameters, allowing a much broader range of meanings in the interaction of the grains and objects, because it establishes control over the relations between the time scales themselves, where methods of stochastic probability manage to integrate the generated grains into unimaginable sequences or structures. That makes Cosmosƒ demand some serious processing power from the computer, because its processing and synthesis engines are quite heavy, dedicated to many processes running in real time.

Such an interesting piece of software is the result of the deep research and dedication of just one person, Sinan Bökesoy, devoted to computer music research and composition and responsible for programming remarkable pieces of software in which not just theoretical research is imprinted, but also a more personal, deep, aesthetic pursuit: a way of expressing oneself artistically through the algorithm itself. So, in order to explore Bökesoy’s perspective carefully and in detail, we arranged the following interview.

How did you get interested in sound and specifically the development of sonic software?

I was fond of classical music, but whenever I found some opportunity to meet the “electric sound”, it ended up building into a career of full investigation of it. The turning point for me was my visit to ICMC 1997 in Thessaloniki. I have witnessed years of digital audio development, from the era of hardware equipment / sound modules to the plugins in the DAW environment, tied up with the progress of personal computer technology. Developing music software, or making some tools for sonic interaction, first became possible for me on the Csound and MaxMSP platforms in the late 90s, but I also have serious music production experience for media, where I have consistently used the tools of the recording business. Another turning point for me was moving to Paris to continue my studies in computer music, beginning at CCMIX with Gerard Pape in 2001. Being active in composing, performing, researching and developing over these years, I have always been engaged with sonic work, especially on the experimental scene. It is essential to design your own tools to bring some fresh look, especially in computer music. In 2011, I became interested in openFrameworks and coding in C++, which led to the development of audio applications on the JUCE platform. Now I am also working on recent projects on the iOS / Objective-C development platform.

How have people like Xenakis influenced your way of conceiving sound and working on that line between mathematics and aesthetics? In which ways? I guess that is very present in Stochos and is in a way transcended in Cosmosƒ.

My studies on Xenakis and contemporary composition at CCMIX in Paris (Centre de Création Musicale Iannis Xenakis) were of course the best opportunity to meet Xenakis’s philosophy at first hand, and the Xenakian composers like Gerard Pape, Di Scipio, Horacio Vaggione, Curtis Roads, Wishart and others. Sonic construction as composing the sound, and composing with the sound on various levels (micro – macro) and with computational tools simultaneously, has been a continuous investigation of the Xenakis school, along with the developments made in the technology delivering it. By technology I don’t mean just the programming tools and computers, but, as you say, the mathematics and algorithms which organise the sonic structure. From calculations on paper to preliminary digital audio rendering systems (UPIC, GenDyn), Xenakis made progress on the implementation of his own tools, leading to his legendary musical results.

These studies taught me to develop a combinatory gesture in the creation process of audible results, be they artifacts or personal favourite choices.

One remark: Xenakis had no definitive position on the impact of mathematics on musical aesthetics. The aesthetics issue is a shifting ground depending on many factors. Practically, the inspiration for his implementation ideas and application methods comes directly from nature, and we naturally don’t judge the aesthetics of nature; it is its manipulation by man that we do criticize. Xenakis had to apply some interaction and mapping strategies to the structures found in natural processes, conceived as abstract numbers but mapped to musical parameters. Let’s say the DNA of his music was constructed by himself as a mapping of natural phenomena, and not by the history of music and its legends. I have seen this inspiration in other composers of the 20th century as well, though with a different use of technology.

Besides, the seed of Xenakis’s musical ideas comes from mythology and from the roots of philosophy as universal archetypes; in this case, human nature. Myth and music in the form of sonic structuring. He has been one of the prominent figures capable of designing many things together simultaneously, in a breakthrough way, within a holistic approach.

Right at the beginning of my studies through Xenakis’s “Formalized Music” book with Gerard Pape, I felt the need for the immediate delivery of tools: touchable, playable and open to experimentation. Otherwise, what we have in text form, full of Xenakis’s intriguing ideas, wouldn’t help me on its own to enter this holistic gesture of the creation process. This is valid, I guess, for any music genre. If you really want to learn from Beethoven, you have to find a way to touch it, to feel it, like an energy transfer.

Therefore I started by developing Stochos, and by exchanging ideas with Gerard Pape; maybe it was the first tool born out of Xenakis’s ideas that belonged to modern technology, with real-time action, applied to a digital sound synthesis platform as it should be. Its strength does not lie just in the user interface (a classic MaxMSP look) or the real-time response, etc., but in the success of this combinatory approach of ideas and platforms, and in designing something usable for composition. We presented pieces composed with just Stochos at various conferences and events.

[image: cosmplug3]

One of those advances over the theory proposed by Xenakis is your extension of the stochastic operations not only to a single scale but to several of them. I wonder what the challenges are on the programming side, and also how you think it contributes to the way we can conceive, transform and explore the structure, distribution and behavior of sound as a material over time.

Micro and macro structures, considering time and space as parametric expansion, have been a fundamental approach for the great composers of 20th-century music. However, Xenakis’s algorithmic results with stochastic distributions delivered a mono-layered evolution on a single time scale, and the mix of the multiple layers was later delivered by the composer’s interaction. Evidently, the margins of the stochastic processes can be perceived in the audible results. A very fast random change in a sonic parameter will lead to noise, which again is not perceptually interesting, as our hearing system judges this as repeating dynamics. Interesting sounds have timbral change over time, and the question is always how to construct them in the case of an algorithmic approach. Xenakis worked on this matter both on the note level, the score and orchestration with acoustic instruments, and also on the waveform level with digital tape and computer software beginning in the 90s.

Regarding the processes, his attempt to apply Markovian stochastics to bring higher complexity to the process of event distribution is characteristic, but not the ultimate solution for generating this process of change in time, and of course evolution on multiple time scales. Also, all of today’s granular synthesis applications operate accordingly, on a single layer of interaction. The events are distributed with simple mechanisms, and parameters of the sonic entities such as density, particle pitch, etc. are altered individually. There is no interaction between the time frames of the event distribution process inherent in its computational mechanism.
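To make that single-layer model concrete, here is a minimal C++ sketch of a conventional granular scheduler: grain onsets are drawn from one density parameter and nothing couples one time frame to another. Names and numbers are illustrative assumptions, not taken from any particular engine.

```cpp
#include <iostream>
#include <random>
#include <vector>

// Single-layer stochastic scheduling: grain onsets follow a Poisson process
// whose only control is a density parameter (events per second). Every grain
// lives on the same time scale; no time frame interacts with another.
std::vector<double> scheduleGrains(double densityHz, double lengthSec, unsigned seed)
{
    std::mt19937 rng(seed);
    std::exponential_distribution<double> gap(densityHz); // mean gap = 1 / density
    std::vector<double> onsets;
    for (double t = gap(rng); t < lengthSec; t += gap(rng))
        onsets.push_back(t);
    return onsets;
}

int main()
{
    for (double t : scheduleGrains(20.0, 1.0, 42))  // roughly 20 grains in one second
        std::cout << t << "\n";
}
```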

First, with the development of the Stochos software back in 2002, I could investigate the ideas of Xenakis and apply them to a modern synthesis environment. This gave me the opportunity to understand the concept, experiment with it and extend it. So the seeds of the Cosmosƒ software were sown in relation to these expansion plans. The question was how to conceive a perfect reductionist attempt, and I just had to look to nature: bottom-up and top-down construction on multiple layers, self-similarity, self-organization on multiple time scales. There is a rich literature across multiple disciplines investigating systems theory, the modelling of complex phenomena and controlled chaos.
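The extension he describes can be pictured as one stochastic rule re-applied across nested time spans, so that the placement of meso events inside a macro span and of micro events inside each meso span follow the same logic. The following C++ fragment is only a sketch of that self-similar, multi-time-scale idea, with assumed level names and subdivision rules; it is not Cosmosƒ’s implementation.

```cpp
#include <cstdio>
#include <random>
#include <vector>

// Self-similar, multi-time-scale distribution: the same stochastic rule that
// places meso events inside a macro span is re-applied to place micro events
// inside each meso span.
struct Event { double start, duration; int level; };          // 0 = macro, 1 = meso, 2 = micro

void distribute(double start, double duration, int level, int maxLevel,
                std::mt19937& rng, std::vector<Event>& out)
{
    out.push_back({start, duration, level});
    if (level == maxLevel) return;
    std::uniform_int_distribution<int> count(2, 5);           // how many child events
    std::uniform_real_distribution<double> offset(0.0, 1.0);  // where inside the parent
    int n = count(rng);
    for (int i = 0; i < n; ++i) {
        double childDur = duration / n;                       // self-similar subdivision
        double childStart = start + offset(rng) * (duration - childDur);
        distribute(childStart, childDur, level + 1, maxLevel, rng, out); // same rule, next scale down
    }
}

int main()
{
    std::mt19937 rng(7);
    std::vector<Event> events;
    distribute(0.0, 10.0, 0, 2, rng, events);                 // a 10-second macro event
    for (const Event& e : events)
        std::printf("level %d  start %.3f  dur %.3f\n", e.level, e.start, e.duration);
}
```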

What do you find interesting in the stochastic operations and functions in terms of the possibilities one can get when exploring sound transformation and the structural characteristics of timbre?

The system itself gives us some parameter control to drive this self-evolving structure, which is essential to bring it forward as a compositional tool. The composer has a certain access to the operations leading to the favoured results; however, under the hood there is an immense amount of rendering handled by the computer through these conceived processes. Here the computer delivers these calculations as audible phenomena, where the composer can interact on the musical level. Stochastic operations are something we experience in many events around us through our senses, and these natural processes can be heard instantly as sonic mapping processes. Fascinatingly enough, there is also the appealing visual aspect, brought again through the mapping of this data. Cosmosƒ uses many statistical functions to make the models of such event distributions audible and visible in a rich space: controlled randomness towards complexity, achieved on multiple time scales. Back to the aesthetic judgement issue, there are no certain best practices in how to use them, so your question can have many different answers. My personal attempt has been to provide a tool which can deliver sonic results that other software tools can’t. Obtaining something original and avoiding clichés can sometimes be priceless in music production. Cosmosƒ offers a pilot seat where this immense flow of data can be heard, seen and controlled instantly. This power is what computers should offer us as sonic data renderers for computer-aided sonic design, and Cosmosƒ is one successful interface / model of it.

I find the approach you take with the density patterns very interesting and useful, as it offers tons of new possibilities, not only for obtaining new timbres but also for understanding how these micro-sonic networks develop into wider macro-structures. How do you think this density factor of granular audio can be properly understood and expanded in practice? Do you find any particular aesthetic consequences?

Sound and music emerge from each other, and each moment of it can be analysed with a different approach. Everything can be scaled down into its particles, the organisation of them and, of course, their density as quantity and spread in space. Xenakis defined this space in vector form with parameters of sound such as intensity, pitch and timbre qualities. The density patterns are one way to watch the evolution of a sonic structure as event quantities in a certain time frame. It is also one primary factor for organising the micro events. There are phenomenal aspects where we can witness that by just changing the density, hence the organisation of micro sounds in a certain time frame, we change the perceived loudness, pitch and timbral characteristics all together.

The change in density patterns can have a high impact on the perceived sound on higher scales too; this is a psychoacoustic phenomenon. And with careful composition of this, one can compose the sonic form, the gestalt of it.
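One concrete, well-known case of this: a regular train of impulses at a few events per second is heard as a rhythm of separate clicks, while above roughly twenty events per second the same train fuses into a pitched tone at the density frequency. A minimal C++ sketch of such a click train, purely for illustration:

```cpp
#include <vector>

// Regularly spaced unit impulses at `densityHz`: low densities are heard as a
// rhythm of clicks, densities above roughly 20 per second fuse into a pitched
// tone at the density frequency, i.e. density alone changes perceived pitch.
std::vector<float> clickTrain(double densityHz, double seconds, double sampleRate = 44100.0)
{
    std::vector<float> buffer(static_cast<std::size_t>(seconds * sampleRate), 0.0f);
    double period = sampleRate / densityHz;                   // samples between events
    for (double pos = 0.0; pos < buffer.size(); pos += period)
        buffer[static_cast<std::size_t>(pos)] = 1.0f;
    return buffer;
}

int main()
{
    auto toneLike   = clickTrain(200.0, 1.0);   // fuses into a 200 Hz pitch percept
    auto rhythmLike = clickTrain(4.0, 1.0);     // heard as four separate clicks
    return toneLike.empty() || rhythmLike.empty();
}
```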

We have an enormous amount of possibilities; however, a minimalist approach to controlling this process of change is quite useful and fascinating, which is something I especially experienced during my studies at CCMIX with Gerard Pape. A bottom-up approach to sonic construction with minimal gestures.

So Cosmosƒ is a result of your PhD work on sonic time scales and their actual relationship. Could you please tell us more about the research behind the creation of Cosmosƒ and the process of conceiving and creating it?

I continued my studies at the University of Paris 8 with Horacio Vaggione, where, interestingly, Curtis Roads also did his PhD. Through my studies on Xenakis, and after having developed the Stochos application, I was simply wondering what would be the simplest gesture to enhance the evolution of these models. The directions were then increasing the complexity and breaking the perceived margins of the stochastic distributions: a richer sound, but again the simplest model which would achieve it, apart from Xenakis’s approach. So combining self-similarity with a multi-time-scale distribution of these event cells would be the direction. Later I added the sonic travel from the output of the system back towards its seed, the micro events, as a form of continuous loop. My DAFx06 paper presenting this model, where I also discuss the mathematics behind it, was highly appreciated. My PhD is basically the path I have taken as a research process and the outcome of it.

I had to explore these things myself, this immense background of Xenakis and the other composers around me, with new tools combining my vision and my self. This has been the primary motivation for developing these tools. Once the route had started, it was not just a PhD study but a personal challenge.

The processes that happen at the meso scale are often mysterious, although very rich, as they offer a territory of time that allows one to discover pretty amazing sounds and microsonic interactions. In your research and creative process, did you find any specific qualities of the meso scale? How do you value it in relation to the micro and macro processes?

In Cosmosƒ the meso scale can belong to very fast operations or to slow ones. There are no fixed territories, only relations of the meso to the micro or the macro. Music is a phenomenon which we can observe and sense on multiple time scales; I don’t see this in any other art form. The implementation of ideas on various scales, and the top-down breakdown of a compositional span of time, has created so many practices for handling this phenomenon. You can operate on time and deceive the perception of it. Cosmosƒ gives some possibilities to focus on separate layers and watch the result with global interaction. But of course it is not the only tool which offers such solutions. I believe the mathematical abstractions of compositional gestures clearly show us the traces of this ever since Bach.

I really love the fact that Cosmosƒ is created with a cycling dynamic in mind, as the macro event, once formed, can go back into the chain and evolve in new ways, in a feedback that is returned right to the micro-event line. I wonder how this self-evolving pattern affects the possibilities of the processed signals and what is possible with that. How did you encounter such a complex and interesting idea?

This feedback process as sonic interaction is a fascinating subject, best described and pioneered by Agostino Di Scipio, one of my professors. The possibilities are terrifying, and Cosmosƒ’s cycling audio buffering system can easily drive you to sonic territories where one has never been before. I proposed this again in my DAFx07 paper, “Synthesis of a macro sound structure within a self organising system”. Here the degree of this self-organisation varies. Cosmosƒ is still not a cybernetic system; it has no intelligence, given that it does not observe its own happenings and decide what to do next.

But the possibilities are still terrifying, so constant experimentation is the only key to discovering the facts. Sometimes it is very difficult to repeat what you did before in the same way. Before, we were in a very limited sonic space with the synthesis tools of the industry, as the industry demanded it. Now, with Cosmosƒ, an audio professional’s challenge is to dive into new paradigms and survive there.
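The cycling idea can be pictured as the output of one macro rendering becoming the source material, the seed, of the next pass. The sketch below is only a schematic of that loop, with arbitrary grain sizes and counts; it is not Cosmosƒ’s actual buffering engine.

```cpp
#include <random>
#include <vector>

// One pass renders a "macro" span by scattering grains cut from a source
// buffer; the rendered output then seeds the micro events of the next cycle.
std::vector<float> renderMacroSpan(const std::vector<float>& source, std::mt19937& rng)
{
    const std::size_t grainLen = 2048;                        // assumes source is longer than one grain
    std::vector<float> out(source.size(), 0.0f);
    std::uniform_int_distribution<std::size_t> pick(0, source.size() - grainLen - 1);
    for (int g = 0; g < 200; ++g) {                           // scatter 200 grains
        std::size_t src = pick(rng), dst = pick(rng);
        for (std::size_t i = 0; i < grainLen; ++i)
            out[dst + i] += 0.1f * source[src + i];           // naive overlap-add
    }
    return out;
}

int main()
{
    std::mt19937 rng(3);
    std::vector<float> seed(44100, 0.0f);
    seed[0] = 1.0f;                                           // a single impulse as starting material
    for (int cycle = 0; cycle < 4; ++cycle)
        seed = renderMacroSpan(seed, rng);                    // macro output feeds the next micro cycle
    return 0;
}
```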

[image: cosmosfv22morph1]

As I remember Curtis Roads mentioning, this conception of microsound allows us to understand sound in a completely new way, as it is not only a wave phenomenon but also a particle-based manifestation… Have you found that in your microsonic explorations? What aspects have you found interesting in developing sound tools from these microsonic conceptions? What do you think is different with today’s technology compared to the initial experiments in this field, and how do you think microsound affects the way we not only develop sounds but also perceive them?

The particle behaviour of sound, the “acoustic quanta” definition, was proposed by the scientist Dennis Gabor in the first half of the 20th century; let’s say the physical facts have been known for a long time. However, the practice of this knowledge in a composable medium with concrete sonic results became available only with the aid of digital audio technology. And this is when Curtis Roads (among others) was able to suggest his purist approach to micro sound manipulation. It is clear that granular synthesis, as defined in North American computer music research, has a certain frame to its application. On the other hand, applications like Stochos and Cosmosƒ liberate each sonic entity through a richer definition, with sonic parameters in its vector space, such as Xenakis had already suggested but didn’t have the means to realise in the digital domain. This is the event-based representation. Besides, a master of digital micro sound composition, Horacio Vaggione (my thesis director), has his own tools developed in SuperCollider. A composer’s motivation is not just proving the concept of the technology being used, but melting it down into his creations as a total integration, not as a sauce on top. Microsonic explorations should lead to composers’ own articulations. Tools developed from Xenakis’s ideas should not necessarily repeat his musical results either.

In the digital domain we work on streams of samples, which offer a high-resolution time / frequency domain representation of sound. And this suits the particle representation of sound well, where you chop out a number of samples and apply an amplitude frame to avoid DC-offset clicks at the beginning and end of the sample chunk. The wave representation of sound deals with the operations beginning with Fourier analysis, in other words operations in the spectral domain. The spectrum of sound and the spectrum of light, and the particle behaviour of sound and of light, have very similar consequences in their acoustic behaviour, such as occlusion, refraction, diffusion, etc. There is no doubt that these models are valid. We have standing waves in a woodwind instrument, but the friction of the bow on a string instrument generates particles behaving as an exciter on its body.

How do we chop samples? And how do we perceive the organisation of them, when a single micro event played back does not even make an audible impact on our hearing system? Working in such domains needs advanced tools, and this is where Cosmosƒ comes onto the scene as a digital audio lab. We can perceive the micro sound only when we organise a stream of it, and there are many ways to do it. The gain here is of course unique sounds, a part of the universe we couldn’t reach until now.
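The “amplitude frame” mentioned above is, in practice, a short window applied to the chopped-out chunk so that it starts and ends at zero. A minimal sketch, assuming a Hann window and illustrative function names:

```cpp
#include <cmath>
#include <vector>

// Cut a grain out of a sample buffer and apply a Hann amplitude envelope so
// the chunk starts and ends at zero, removing the clicks a raw rectangular
// cut would produce at its boundaries. Assumes length >= 2.
std::vector<float> extractGrain(const std::vector<float>& sample,
                                std::size_t start, std::size_t length)
{
    const float pi = 3.14159265358979f;
    std::vector<float> grain(length);
    for (std::size_t i = 0; i < length; ++i) {
        float window = 0.5f * (1.0f - std::cos(2.0f * pi * i / (length - 1))); // Hann window
        grain[i] = window * sample[(start + i) % sample.size()];               // wrap around for safety
    }
    return grain;
}

int main()
{
    std::vector<float> sample(44100, 1.0f);         // placeholder source material
    auto grain = extractGrain(sample, 1000, 2048);  // one 2048-sample windowed grain
    return grain.empty();
}
```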

In Cosmosƒ, each event can have its own synthesis engine, besides the controls of the event as such. How do you think this affects the sounds obtained? Also, what is the relationship of that chain with the rich modulation system you applied there?

The problem is trying to describe an audible phenomenon in text form. These things can be explained in various forms, in scientific papers or with a philosophical approach. But I have seen musicologists describe Cosmosƒ as “a granular synthesizer with Xenakis ideas”. Sadly, this is how they perceive these tools without using them; they can’t predict them well and describe them in text form without using them.

As for the front end of Cosmosƒ: the app has a self-similar structure, and this has been implemented in its user interface as well. The modulation scheme operates on micro events and meso events independently. A macro event distributes meso events, and each meso event distributes micro event frames within its event duration. This is how the multiple time scales are implemented, with an interdependency in the structure. This brings a higher-level complexity through the natural connections of this bottom-up construction. However, it is totally controllable and visually interactive. The particular effect of an upward construction is the process of change from micro towards macro as a vertical image. It is hard to describe or regard it in expected ways. There is a simple video exposing the stochastic morphing engine of Cosmosƒ, exhibiting maybe the wildest sonic range one can hear in a video of that length. It was done with simple starting material, but the parametric transitions in continuum driven by the morphing engine suggest a small universe which is still big for us!

The parallel universes feature is also very unique. I wonder what possibilities you have found with that feature. Could you explain to us why that option is kind of revolutionary for your instrument and way of synthesis?

Parallel universes is a metaphysical concept; it suggests that, like our universe and with the same dynamics, there is another universe rolling on somewhere. As you know, synth plugins have a multitimbral polyphonic structure where you can assign different instruments to different channels. So if you press a MIDI key, layers of these voices with different instrument assignments start to play. The question for me was how to implement polyphony, but in Cosmosƒ’s unique world. The answer was the parallel universe: both universes have different chemistry but similar dynamics, and they will come up with different distributions of events because of the randomness aspect in each universe. In the end, very complex results can be obtained, because you can feed the output of one universe into the other, etc. There can be different clock rates between the universes, and other macro parameters shifting the behavior between them.
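In code terms, this can be imagined as two instances of the same generative process with independent random streams and event rates, where one instance may read the other’s output as its source. The toy sketch below works under those assumptions and is not Cosmosƒ’s internal design.

```cpp
#include <random>
#include <vector>

// Two "universes": the same toy rule run with independent random streams and
// different event rates; feeding A's output into B couples two otherwise
// independent stochastic processes.
struct Universe {
    std::mt19937 rng;
    double eventsPerCycle;

    std::vector<float> renderCycle(const std::vector<float>& source)
    {
        std::vector<float> out(source.size(), 0.0f);
        std::uniform_int_distribution<std::size_t> position(0, source.size() - 1);
        std::uniform_real_distribution<float> gain(0.1f, 1.0f);
        for (int e = 0; e < static_cast<int>(eventsPerCycle); ++e) {
            std::size_t p = position(rng);
            out[p] = gain(rng) * (1.0f + source[p]);          // events shaped by incoming material
        }
        return out;
    }
};

int main()
{
    std::vector<float> seed(44100, 0.0f);
    Universe a{std::mt19937(1), 50.0};     // same rule, different seed and rate
    Universe b{std::mt19937(2), 90.0};
    auto outA = a.renderCycle(seed);
    auto outB = b.renderCycle(outA);       // cross-feed: A's output seeds B
    return outA.empty() || outB.empty();
}
```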

And talking about the synthesis engine, how is the integration of the different modes of synthesis, and how do you think the micro-generation of digital signals is related to the micro-meso processing of external samples? I find both sides really merged in the way Cosmosƒ works.

The waveform generation part of Cosmosƒ is very straightforward and maybe the least original part. Actually, I have implemented a well-known paradigm of oscillator handling, which can use external digital samples as well as simple waveforms, as in analog synthesis. As you would agree, the phenomenon here depends less on how you generate the micro sound; the power is in what you do with it and how you organise the micro events, hence the bottom-up construction mechanism. I consider the processing tools, like the filter section, ring modulation and the granular delay, a bonus for operating in real time on micro-level and meso-level events independently.
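That conventional oscillator paradigm, a micro event sourced either from a basic waveform or from an external sample read at the same normalized phase, can be sketched roughly as below. The names and the linear interpolation are illustrative assumptions, not the actual engine code.

```cpp
#include <cmath>
#include <vector>

// A micro event's source is either a basic waveform or an external sample,
// both addressed by a normalized phase in [0, 1].
enum class Source { Sine, Sample };

float readSource(Source src, double phase, const std::vector<float>& sample)
{
    const double twoPi = 6.283185307179586;
    if (src == Source::Sine)
        return static_cast<float>(std::sin(twoPi * phase));

    // external sample, read with linear interpolation
    double pos = phase * (sample.size() - 1);
    std::size_t i = static_cast<std::size_t>(pos);
    std::size_t j = (i + 1 < sample.size()) ? i + 1 : i;
    double frac = pos - static_cast<double>(i);
    return static_cast<float>((1.0 - frac) * sample[i] + frac * sample[j]);
}

int main()
{
    std::vector<float> sample(1024, 0.5f);
    float a = readSource(Source::Sine, 0.25, sample);    // basic waveform
    float b = readSource(Source::Sample, 0.25, sample);  // external material
    return (a + b) < -10.0f;                             // always 0; keeps the values "used"
}
```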

We are here in a scene where a small change in a micro-level parameter can expand into big consequences (a very clichéd phrase from the subject of controlled chaos) and into happenings in the universe over the macro time span. However, this is what one primarily studies within Cosmosƒ.


I wonder about the aesthetic implications you find in programming these kinds of tools, and whether you keep in mind the artist’s point of view. What is your relationship with the people who use the tool, and how do those experiments and artworks influence your programming?

Honestly, all these results of programming and the visual design inherit my personal choices and subjective interpretations. Designing the tool of creation, rather than creating with already-designed tools, has been an extraordinary fact since da Vinci, I guess, but today computers put so much power into the hands of the creative person / artist, and if you know how to shape this, designing your own tools becomes an ordinary fact, actually a genre called creative coding. I think products with a collaborative design and workflow are shaped differently than products developed through one mind and its holistic gestures. This is certainly a privilege for me.

I also like the graphic representation of the actual events, as the particles, cycles and different forms are always visible in a unique way that allows one to understand a little more of how these things work. What’s your concept behind that visual representation, and how do you think it affects the use of the tool itself?

Ehm, I am not a graphic artist at all, using Photoshop etc. The whole graphical concept of Cosmosƒ was prepared in the OS X Keynote app, with rectangles and circles. However, as with the multi-layered sound, the graphical output also benefits from the natural beauty of the mathematics behind it, be it in the universe cycle display of events, the morphing engine or the Sieves engine. I have just put things in a simple way, and they have expanded into this organisational beauty on their own. In the end, the Cosmosƒ UI has been featured many times on design blogs, etc.

When thinking about sonic technological development in the future, is there something you would like to see? I’m thinking both of the possibilities for programming and of the options for performing.

I am really fascinated by the field of algorithmic sound design, and I see many possibilities for further applications in the fields of cybernetics and artificial intelligence. For me it is not about having the computer compose another Bach fugue, but about other possibilities, such as creating immense amounts of data and manipulating it to create seeds on various levels, from inaudible abstractions to resynthesis by analysis made by the computer, learning from the composer’s favorite instant choices.

I guess there will be a paradigm shift in computer music software and how we use it. The plugin tools replicating the analog synthesis paradigm with traditional paths (osc, vcf, vca, lfo) can still be considered stone age, so there remains a lot to be done.

To finish, could you share some resources that have been useful in your research and process? Any artist, book, or work you could recommend to people interested in this kind of field?

First of all, I can offer the link to my PhD thesis; however, it is in French only. The bibliography section includes an immense number of resources. All the composers I have studied with and worked with have had an influence. I try to be active in different genres of music, and I try to follow many happenings, scenes and disciplines. But I also step back from just watching, into a self-isolated state of mind for drawing some conclusions, where the inspirations become realistic and useful. What you do with the concept of “infinite grain” I like a lot; it gives a personal touch to the subject, clearly exposing a passion for it, open-minded rather than filled with institutional clichés. Cross-referencing is very important, and even with your own being: verification. But don’t be afraid of deceiving yourself and drifting in unexpected directions.

Last but not least… what’s coming next? Are you working on something right now?

Of course there are many things in mind, from an iPad version to the development of the first cybernetics engine driving a sonic renderer in plugin format. However, as a tool, Cosmosƒ already offers an overwhelming number of features to the user. One could spend a lifetime in constant development, but a professional product takes significantly more time to design than a personal experimental tool, and generally there is time needed for the music business to absorb the innovations and create a demand for them.

Interview conducted by Miguel Isaza in June 2015

Miguel Isaza M

Listener, speaker.