Engineering and Music
"Human Supervision and Control in Engineering and Music"




Johannes Kretz

"Composing Sound"
Basic thoughts towards a refined control of timbre in music

1. Making music is the intentional, controlled creation of sound.

a. In composed music, the creation of sound is controlled on a first level by the composer, who directs the behavior of the musician(s) by means of a score, and on a second level by the musician, who has a limited freedom and responsibility in translating the indications of the score into sound.

 i. The concept of notes in a score is an efficient, but for that very reason quite limited, language for describing sound. The severe data reduction leads to ambiguity and leaves much room for interpretation (and misunderstanding, too).
ii. One must also keep in mind that a traditional musical score is a compromise between a description of actions (fingerings, hand positions, techniques to use, etc.) and an approximate representation of results.
iii. The traditional concept of notes is quite well suited to representing pitch in time. The representation of intensity (through symbols like p or ff) is simple but imprecise. And concerning spectral information (timbre), a score can only specify the cause (which instrument), not the result (the quality of the desired sound).
 iv. For all these reasons, it has always been the task of the performer to complement the missing information with his or her understanding of style, taste, etc. But these "shortcomings" of score language have also been seen positively: small deviations, and even eccentric renderings of a given score, allow the musician to adapt the playing to the acoustic conditions of the hall and the psychological atmosphere of the audience. A certain degree of flexibility in the control of music has always been desired.
b. The interest in better control of timbre, a development with its roots in the 19th century and much progress in the 20th, is one of the reasons why computer music has become a focus of musical progress today.
i. The immediate control of sound and its inner content directly by the composer changes the way of composing and the concepts involved in the creative process, which in turn has a strong impact on purely instrumental writing.
ii. For reasons of efficiency, (new) concepts for the representation of sound are needed, not least to allow a straightforward transfer of information from the composer to the machine.
 iii. These concepts should allow the composer to express his or her ideas in a convenient way, but should also be suited for direct translation into machine code for the generation of sound.

iv. In order to be as general as possible, these new high-level languages describing sound in a sophisticated, meaningful and complete way should be based on the results of psychoacoustic research, which tells us, for example, that timbre is actually a quality composed of several parameters: percussiveness, spectral flux, brightness, harmonicity, etc.

v. Again, certain essential aspects of the creation of sound should be predefined during the process of composing, while other details should be left open, so that spontaneous modification remains possible during performance.

1. (Even in improvisation, it is quite common to prepare at least some material, which is then unfolded in various ways during performance.)
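One of the psychoacoustic timbre parameters mentioned above, brightness, is commonly correlated with the spectral centroid of a sound (the amplitude-weighted mean frequency of its spectrum). As a minimal illustrative sketch, in Python with NumPy rather than in any of the composition environments discussed here:

```python
import numpy as np

def spectral_centroid(signal, sample_rate):
    """Brightness correlate: amplitude-weighted mean frequency of the spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return np.sum(freqs * spectrum) / np.sum(spectrum)

sr = 44100
t = np.arange(sr) / sr
dark = np.sin(2 * np.pi * 220 * t)                  # a pure low tone
bright = dark + 0.8 * np.sin(2 * np.pi * 3520 * t)  # the same tone plus a high partial

# Adding the high partial raises the centroid, i.e. the sound is "brighter".
brighter = spectral_centroid(bright, sr) > spectral_centroid(dark, sr)
```

Descriptors like this one give the composer a measurable handle on the "result" side of timbre, rather than only the "cause" (which instrument).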
c. According to a. and b. (and to my own experience in computer music), two essential tools are needed for computer music.
i. An environment for the investigation and creation of sound material.
1. Independently of the actual method of synthesis, composers need an interface to control synthesis or sound treatment in an efficient way: one strongly oriented toward their personal way of thinking about sound, which allows them to concentrate first of all on the aesthetic content of the music and less on the necessary technical details.
a. Therefore an open environment with a user-friendly graphical interface (like IRCAM's "OpenMusic"), offering "libraries" in which the specific musical experience of various composers is already coded into objects that everyone can access and interconnect, seems a very promising approach. It allows each composer a very independent, individual way of working, and at the same time makes it possible to reuse other composers' basic objects, avoiding having to "reinvent the wheel" all the time.
b. Experience shows that the fields of "control of sound synthesis" and "computer-aided composition" are quite close, so the same environment can often be used in both (OpenMusic, Marco Stroppa's "chroma", my "Klangpilot" environment, ...).
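The idea of library objects that can be accessed and interconnected can be sketched in code. The environments named above are graphical patching systems; the following is only a textual Python analogy, and the class names are invented for illustration:

```python
import numpy as np

class SineObject:
    """A reusable 'library object': one sine partial with fixed frequency and amplitude."""
    def __init__(self, freq, amp=1.0):
        self.freq, self.amp = freq, amp

    def render(self, duration, sr):
        t = np.arange(int(duration * sr)) / sr
        return self.amp * np.sin(2 * np.pi * self.freq * t)

class MixObject:
    """Interconnects other objects by summing their outputs; objects compose freely."""
    def __init__(self, *sources):
        self.sources = sources

    def render(self, duration, sr):
        return sum(src.render(duration, sr) for src in self.sources)

# A composer reuses existing objects instead of re-coding synthesis from scratch:
chord = MixObject(SineObject(220.0), SineObject(277.18, 0.7), SineObject(329.63, 0.5))
samples = chord.render(0.5, 44100)
```

Because every object exposes the same interface, one composer's objects can be plugged into another's patch without knowing their internals, which is the point of the shared-library approach.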
ii. Secondly, a performing environment is also required, to allow interaction with instrumental musicians and/or the audience.
1. Experience shows that it is possible to use a tape together with some musician(s) on stage, and that, if the performers know the tape very well by heart, the illusion of a spontaneous performance can be achieved.

2. But a more intelligent, flexible and open way of playing back and performing the prepared sound material would allow the instrumental musician(s) to play in a more relaxed way, and would also lead to more variety between different performances.

3. For some years now, software for live electronics, such as "MaxMSP", has made it possible to modify the "pre-cooked" sound material in real time during performance.
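The principle of fixing the essential material at composition time while leaving some parameters open for performance-time modification can be sketched as follows. This is an illustrative Python sketch, not MaxMSP code, and the parameter names (`stretch`, `tilt`) are invented for the example:

```python
import numpy as np

def render_material(partials, duration, sr, stretch=1.0, tilt=1.0):
    """Pre-composed partials (freq, amp) are fixed; 'stretch' (global frequency
    scaling) and 'tilt' (per-partial amplitude roll-off) stay open for live control."""
    t = np.arange(int(duration * sr)) / sr
    out = np.zeros_like(t)
    for i, (freq, amp) in enumerate(partials):
        out += (amp * tilt ** i) * np.sin(2 * np.pi * freq * stretch * t)
    return out

# Fixed at composition time:
material = [(220.0, 1.0), (440.0, 0.5), (660.0, 0.33)]

take1 = render_material(material, 1.0, 44100)            # a "neutral" rendering
take2 = render_material(material, 1.0, 44100, tilt=0.4)  # darker, adjusted in performance
```

Each performance can thus realize the same composed material differently, which is exactly the variety between performances argued for above.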

2. In this sense, computer music can be regarded as having left the period of pioneers and prototypes: it has become a mature and refined means of working with sound, at least as precise as working with pitches, time information and dynamics. Each epoch has developed and added its specific instruments, and this surely has had, and will have, an impact on musical content. Finally, smooth navigation in the field of timbre has become possible. A new land has been opened for creativity.