Music Technology and Musical Expression


An Apology

First of all let me apologise for this talk - as that is what it is. It is not a formal paper as I have simply not had time to deal with all the issues in their proper fullness. Many of the ideas expressed are just that - ideas that are still in the process of being formed. Many are insufficiently researched, or even worse, not researched at all. One of the reasons for this is that the subject itself involves so many disciplines that it may take many years to come to an even partially complete understanding of current work. Another reason is that, as I will mention, the ideas are necessarily of the present. They make use of technology which is only now becoming commonly available and ideas which are based on often highly advanced use and understanding of that technology. As I shall argue, one of the main points is the importance of the common availability of the technology. As I shall try to explain, the expressive use of technology in music necessarily touches on areas as diverse as psychology, artificial intelligence, physics, neurology, virtual reality and others.

What I have is essentially a series of questions. I am in the process of attempting to answer these, or at least of attempting to confirm that they are indeed the questions that need and are worth answering. One of the main features of this, and I'm sure of other investigations into these areas is that it is often far from clear which questions will be fruitful, which barren, and which simply lead to another set of questions in which the answers to each are as elusive as the answer to the original.

1. What are the problems with Music Technology? How, if at all, do these relate to other technologies, such as virtual reality, culture and entertainment?

Advances in technology over the last forty years or so have been so substantial as to make any rational judgement about the lasting value of the resulting technology very difficult indeed.

Views concerning what is called the new technology range from the highly and dogmatically opposed, to the highly and dogmatically in favour, to the pragmatic. Beliefs range from the absurdly optimistic to the absurdly pessimistic, from the ambivalent to the unimaginative.

It is worth remembering that these advances have taken place throughout society - almost everyone will have been touched by them in some way or another. Apart from music, the areas most affected are those involving information processing or communications, including most obviously the internet, games and graphics environments and commerce.

Evaluating any long-term effects of these developments is also difficult because they have occurred at the same time as social, political and economic changes - it is difficult to know how these events are related, if at all.

How have traditional skills and ideas, whether musical or not, been affected by all this; how will they be affected, if at all? There are those who would suggest that there have been profound changes in the nature and emphasis of culture, others who would suggest the reverse. Some would say that the most profound changes are yet to come; others would deny the depth of these changes.

In terms of music, there are many potential ramifications - I will be investigating the effect of technology on the nature of composition and performance. In all these circumstances, one question remains at the centre - how do 'new' electronic instruments relate to traditional ones? What are their advantages and disadvantages, and more importantly, perhaps, what do these qualities mean for the future of musical expression?

2. What, if any, real differences are there between the 'old' and the 'new' technologies?

Many take it for granted that an electronic musical instrument is fundamentally 'the same' as a traditional one. People may now be 'taught' electronic keyboard as well as (and presumably in a different way from) the piano. Over the past few years all sorts of 'new' instruments have emerged - usually instruments attempting to make otherwise traditional designs more compatible with MIDI standards, but many others attempting to widen the expressive and sonic range of their traditional counterparts. The simple fact is that as yet none of these instruments has had any significant effect on the 'traditional' range of instruments, with the possible exception of popular music, where such distinctions are judged in different ways. Even in the latter area, while there are many groups of users who idolise certain manufacturers, certain instruments, certain configurations, certain hardware types, there is still no single set of standards. There are possible exceptions to this - the electric guitar, which is omnipresent in many popular cultures, and the drum kit, again omnipresent but with rather a dubious claim to technological status as we are currently discussing it. The 'synthesiser' is a very good example of this lack of standardisation. We may say that we want a 'synthesiser', but what do we mean by this? Most synthesisers are collections of sound-producing hardware, with, possibly, a number of preset 'examples' that we may use. All the hardware and examples will be proprietary items - the copyright or patents held by one company or another. Does it matter that, for instance, by specifying a synthesiser without make, preset or specification we are leaving so much to chance? Would the epithet "strings" or "electric piano" help?

The Organ

The organ perhaps comes nearest to the synthesiser amongst all the standard acoustic instruments. It is most similar not because of the way in which it creates its sound, but in the relatively complex and mechanical way in which different sounds are produced. In addition, the details of the construction of any particular organ depend on the location and maker of the instrument and, as with the synthesiser, no absolute specifications exist. In spite of these features, however, the organ is clearly an instrument capable of musical expression equivalent to that of any acoustic instrument. Yet the organ's status is also different from that of any other instrument. Many composers have been highly complimentary about the organ. Mozart called it 'the king of instruments'. Berlioz, in contrast to the quotation that follows, also called it 'the pope' as opposed to the 'emperor' of the orchestra.

John Deathridge, on a recent radio programme, said:

The problem with the organ I think is its power complex. Berlioz said it all - he said it was a jealous and impatient instrument that can't put up with any other instrument. I also think it's expressively limited - it's good at lofty sentiments but not so good at humbler ones. I also think the repertoire for the organ is very limited - it's very large but the amount of good music written for the organ is not very great.

In response to the accusation that there isn't very much good music written for the organ, the eminent organist Gillian Weir made this hardly robust defence:

No, there is a great deal of music written for the organ, but the problem in the twentieth century is that the music arose from many different traditions and many different countries…and to try and play it all on one organ or one kind of organ is very difficult. The music is so much married to the sound of the particular organ…It sounds like special pleading but it isn't really if one takes as the advantage of the organ its tremendous versatility and the fact that you can change… the very soul of the music… then you can see that in order to get to know the repertoire you have to have the right kind of organs.

There does seem to be a feeling that, at least currently, the organ needs supporters, and interestingly, many of those supporters themselves seem to feel the need to apologise for the organ's 'advantages' - its versatility, its grandeur - as well as its most obvious disadvantage, that each instrument is unique and uniquely 'tied' to its location. Not only is the above quotation hardly a ringing endorsement of the organ's universality - one can hardly imagine the same being said of an oboe - it also points to specific difficulties that will recur later in this talk.

Of course, synthesisers have been known as 'electronic organs', reflecting this similarity. Interestingly, the acoustic organ itself has never become a part of the general music environment, perhaps because of this lack of standardisation. Even when a composer uses the organ within a concert environment, they most often require one of a small selection of 'typical' organ sounds. The details are usually and understandably ignored.

What is an instrument involving an 'old' technology? - A standard instrument such as a flute, oboe, etc. using physical processes manipulated by a human being to create an audible result. The usual physical processes involve the periodic excitation of strings, membranes, solids, or columns of air.

What is an instrument involving a 'new' technology?

The main criterion we will adopt is the use of electricity to create, amplify, or otherwise modify a sound.

In this sense, the main instruments involving 'new technology' could be divided into the following groups:

  1. Instruments which emulate 'traditional' instruments in the manner of their sound production (and may include various methods of electroacoustic alteration). In most cases this restricts to some extent the nature and variety of the sound created. This group would include the electric guitar - ironically, in its case, many of the methods employed to alter the sound are included in order to overcome the restrictions imposed by the guitar-based interface itself.
  2. Instruments which use analogue or digital synthesis to create/express one central sound which cannot be radically altered. Such instruments are usually restricted and so are unusual nowadays, but would include the Theremin, the Ondes Martenot and any number of 'novelty' instruments.
  3. Instruments which use analogue electronics to create/emulate sounds.
  4. Instruments which use digital methods to create/emulate sounds.
  5. 'Instruments' which use digital/analogue methods to operate and are not designed specifically for musical ends but which may be used for this purpose.

It is worth bearing in mind at this point that the latter two groups of instruments are effectively computers, although (4) includes computers whose hardware construction is aimed at sound creation.

In most cases manufacturers choose to use an interface that is similar to a standard instrument - most commonly, a keyboard. Even if something like a keyboard is not built-in, it is usually included as an optional extra.

This feature is worth thinking about a little more. Why is the keyboard considered such a standard interface, and why do instrument manufacturers so commonly adopt it? Using current digital technology there is absolutely no link between the keyboard - designed, after all, to enable a certain mechanical process - and the methods of producing or reproducing the sound. The fact that a keyboard is seen and used as the 'lingua franca' of musical interfaces does have an effect on the way that music is created and the way that sounds are made and used.
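To make the point concrete, here is a minimal sketch (in Python, purely for illustration; the function names and figures are my own assumptions, not any manufacturer's design). A key press arrives at the sound-producing software as nothing more than a number; which pitch - indeed, which sound at all - that number triggers is a separate and freely alterable decision, of which the familiar equal-tempered mapping is only one choice:

    # A pressed key arrives as a bare number (here, a MIDI note number).
    # The mapping from key to pitch is pure software convention.

    def equal_tempered(note: int) -> float:
        """The conventional mapping: 12-tone equal temperament, A4 = 440 Hz."""
        return 440.0 * 2 ** ((note - 69) / 12)

    def quarter_tone(note: int) -> float:
        """An equally valid mapping: 24 equal divisions of the octave."""
        return 440.0 * 2 ** ((note - 69) / 24)

    key = 60  # the key we label 'middle C'
    print(equal_tempered(key))  # ~261.63 Hz
    print(quarter_tone(key))    # ~339.29 Hz - same gesture, different instrument

The interface, in other words, constrains the performer's gestures but guarantees nothing about the sounds those gestures produce.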

One of my arguments is that all musical instruments, apart from the voice, are technological. In one sense the complex mechanical processes involved in operating a harpsichord or a bassoon are entirely equivalent to a modern computer, although there is a difference in scale. However, there is also, potentially, another difference that the new technologies allow that I think does differentiate these two 'groups' of instruments. I shall come to this later.

3. When is a standard instrument not a standard instrument?

The majority of standard instruments are not protected or owned by copyright or patent. What defines a violin or a piano? Like consciousness and time, it's something we know when we feel it, but when asked to explain, the definition is more elusive. Over the years, and in the light of experience and differing practical requirements, most standard instruments have evolved - pianofortes developed from fortepianos, harpsichords, claviers, etc. Trumpets developed valves; wind instruments developed complex machinery for managing anomalous tones (many of which, incidentally, like the clarinet's break, exist to this day and form an integral, if arbitrary, part of the instrument's character), and this machinery has itself developed over the years. For some time, many standard instruments have remained apparently very similar, and developments in the physical structures of the instruments have in general not occurred. Developments have not generally been planned, and where attempts have been made to plan them, these developments have often been unsuccessful. There isn't time here to discuss why this is, or whether it has in itself had any effect on music in general, but we can mention some of the potential effects. This century, instead of changing the structure of the instruments themselves (although there have been usually unfruitful attempts at doing this), temporary modifications have been made to the structure or to the methods used to perform on the instruments. For instance, pianos may be prepared; the body of the instrument may be struck, the strings plucked, muffled, beaten, scraped. Wind instruments may be muted in a variety of ways - with specialist equipment, hats, cloths. Performers can hum into them, blow into them, strike the keys or valves, only partially compress the valves, create multiphonics. Stringed instruments can be played in ways never originally intended - we may bow behind the bridge, on the neck; we may knock the body, bow the body, use mutes, harmonics, completely retune the instrument. We no longer have to play them in a concert hall - we can play them outdoors, in another room, broadcast the results from helicopters, play them over walkie-talkies. With the introduction of commonly available electronics we can amplify them, chorus them, distort them, modify them in an infinite number of ways.

But even with all of these modifications, are we actually changing the instrument? At one conference, we were asked to look at a piece of notated music and to imagine what it would sound like when performed. As it appeared to be a fairly ordinary piece of stylised, quasi-classical music that is what I imagined. On hearing a performance of the notation, we were all, I think, surprised to be confronted by what sounded more like a gamelan orchestra than a piano. As would be obvious to me now, this was a piece for prepared piano by John Cage. And yet, in spite of the fact that the very sound of the instrument was so radically different from what was expected, how many of us would deny that the piece was for 'piano' - even if prepared? Similarly, many modern pieces making use of the above effects, no matter how alien the sound world created is from the 'original intention' are still written for 'standard' instruments.

In a similar, but more subtle way, as young composers we tend to make the mistake of assuming that all instruments have one particular sound, in spite of the fact that we can quite clearly hear that a piano's low notes are radically different in almost every respect from the same instrument's high notes, and that this disparity is even greater in the case of many other instruments.

All these arguments seem to me to demonstrate clearly that our understanding of the very concept of a 'standard' instrument is a lot more complex than we would normally assume. In fact, they would tend to imply that a 'standard instrument' is not simply a structure for creating a particular type of sound, but a body of data that includes performance techniques, an infinite range of sounds based on and limited by the physical structure of the instruments (including any possible additions), the performance situation and the ability and imagination of the performer. In this sense, the question remains unanswered - how far can you disable, modify or in any other way tinker with an acoustic instrument before it ceases to be that instrument? Incidentally and ironically, however, according to many, there is one simple and easy way to make an acoustic instrument into something else - record it and play it back and it turns into electroacoustic music.

4. Is there a fundamental difference between a performance on a standard instrument and a 'performance' on a computer or synthesiser? How does a piano, for instance, compare with a computer as a musical instrument?

This apparently simple question is of major significance in my argument. What happens when someone learns to play a musical instrument - or for that matter, learns to manipulate any arbitrarily complex physical object in order to achieve an arbitrarily complex result?

In a musical sense, someone learning to play a musical instrument must undertake a series of activities each of which can be assigned to one of two broad categories - practical or technical, and aesthetic or expressive. Many activities can be assigned to each.

Typically, we might assign to the former anything which pertains specifically to physical ability - the use of exercises, scales and arpeggios to enable manual dexterity on the piano, for instance. In this case, the musical material involved will be of little or no aesthetic value and few would make this claim on its behalf. Instead, there is an acceptance of the need for physical fluency if the subject is to be able to make full use of their aesthetic abilities in performance.

Incidentally, one of the 'deep' questions here is how much interaction there is between the mind and the physical processes described above. Our understanding of the very nature of this interaction is in no way clear at present. There is evidence that repetitive 'physical' processes physically alter the state of the brain, making any clear distinction between physical practice and mental 'ability' difficult to define. At the same time, there is some confusion as to the nature of a 'conscious act'. Experiments have shown that brain activity occurs some time before we actually make a physical movement, even if we think we have only just decided to make it consciously. In other words, there would appear to be a time delay between our brain's physical activity and our consciousness which entirely contradicts our own intuition. For instance, we do not experience delays when we speak to each other, even though our conversation would appear to be a conscious, self-controlled act (in most cases).

See Roger Penrose, The Emperor's New Mind, p. 568.

Neither is there felt to be a need for a particularly deep understanding of the physical aspect of the instrument itself by the novice performer. There may well be a fairly vague and non-technical understanding of the construction of the instrument, but understanding of the relationship between the construction of the instrument and the resulting sound is usually considered unnecessary, or even damaging.

Another factor is the importance placed on the understanding of the musical text from which a performer is performing. While it is generally considered valuable for a performer to understand in depth what is musically happening in a piece, what, if any, are the real benefits or advantages over a performer who understands the notation, even the overriding aesthetic involved, but not the detailed musical syntax? This is another fundamental question of artificial intelligence - can someone, or something, be taught precisely which notes to play, in which order, with what force, etc., without an understanding of what they are doing musically? Would they pass a musical Turing test?

One of the main features of any standard acoustic instrument is that there are a limited number of physical parameters to control. With a piano, one has however many keys, two or three foot pedals and potentially some other physical aspects of the piano's body to control. In the case of a stringed instrument, things are a little more complex as one has a bow to control as well as the strings and body of the instrument itself. A trumpet has a mouthpiece, three valves and a number of tuning slides for performing minor tuning operations. Of course, we also control our own bodies while controlling these parameters, but are these physical operations a part of the instrument?

By any account, there are not overwhelming numbers of elements to control - it is a part of the very design of these instruments that, in their development over time, they have become what they are in order to balance ease of use against flexibility and depth of expression. That is not to say, of course, that the range of expression is limited by this limited range of controllable parameters - as we shall see, there is every chance that it is this very weighting in favour of a low number of fixed parameters which gives the standard musical instrument its potential for depth of expression. Moreover, we are all aware of the differences between the instruments in this regard. There are many more lower-grade examinations for stringed instruments, reflecting those instruments' complexity in obtaining any adequate sound at all. This is less difficult with wind instruments, and lower-grade examinations are often absent to reflect this. The piano is a slightly different matter - its sound is very easy to obtain, but the physical and mental abilities required to coordinate the body into controlling the piano satisfactorily are very much greater than for many other instruments.

There is evidence, as mentioned above, that for conscious acts with which we are unfamiliar there is a paradoxical delay between brain activity and our perception of action. This may well imply that our idea of consciousness should not be unified. It is quite clear that a skilful instrumentalist does not need to continually delay performance while a conscious decision is made to play a particular note. Equally, however, it is quite clear that another performer, unskilled on the instrument but with the knowledge to distinguish the correct note, often needs such a delay. One might suspect, therefore, that at some point, or at several points, in the development of a talented instrumentalist, the ability to play a particular piece or in a particular manner reaches a level that is conscious, but where many of the more basic processes are dealt with 'automatically'; and, as mentioned above, there is evidence that continued and repetitive actions can have a physical effect on the structure of certain parts of the brain. See Roger Penrose's book The Emperor's New Mind for an account of this.

If we now compare any of these standard instruments with an electronic instrument, we have a different situation. One of the much-vaunted benefits of electronic instruments is their very flexibility in terms of sound creation, and the very lack of widespread 'custom' electronic instruments based on any performance design other than the keyboard or guitar seems to imply that amongst the general population this flexibility is very important - presumably one reason why the Ondes Martenot, the Theremin and the Mellotron are not everyday musical instruments. In other words, there is a requirement for a standard and well-understood interface (keyboard, guitar) in order to control an otherwise highly flexible and non-physical sound-producing machine, even if those interfaces are themselves irrelevant to the actual sounds produced and can from some perspectives be said to be misleading. We commonly think that the electronic instrument somehow 'automatically' produces sounds divided into tones and semitones, whereas, just as with many standard instruments such as the violin, this is actually far from the case.

However, an electronic instrument is not like a violin, in that it is not physically connected to a single configuration of sound-producing materials. Of course it is possible to create music using an electronic instrument set in a single configuration - this is the 'standard' way of using it, mirroring standard instruments. But the idea that we might 'make' this configuration and then render the instrument incapable of any further radical change would appear to contradict the very point of that instrument's design. According to the above argument, however, it is just this very unconfigurability that allows a performer on a standard instrument to develop the unconscious physical and mental abilities to truly 'perform' on that instrument - it is the fact that such instruments have such a limited set of controllable parameters that allows the performer to control those parameters with depth, detail and speed (and so without consciousness or delay).

As has been said above, a digital electronic musical instrument is in reality a form of computer. In recent years the cost of computers and computing power has fallen dramatically. Only five years ago, CD-quality real-time synthesis was an impossibility except in generally rather limited ways or using very expensive workstations. This is no longer the case. This is not the place for a survey of available audio software and hardware, but generally, digital facilities that were previously only available to a small number of specialists are now available to anyone who wants to use them. Previously highly skilled operations, such as magnetic tape editing, have been comprehensively replaced by digital editing on a computer workstation. This has led to an enormous increase in the quantity of electronic material available, although much of this material is not of high quality, probably reflecting the greater accessibility of the medium in general. The additional impact of the internet, mainly focussed on the publication and distribution of material, has yet to be assessed, but all indications are that technology in general will have a very profound influence on the way we approach music in the future.

But can a computer be a musical instrument? I would imagine that a common answer to this would be that it could - just as it can 'be' a word processor, a communications device, a scientific analytical tool and so on. However, as has been mentioned, a digital electronic synthesiser is effectively a purpose built computer. A computer, then, is a non-purpose built mechanism, or rather, it is built for the purpose of being multi-purpose. Just as the main point of a synthesiser is that it is flexible in certain ways, so it is with a computer. Accordingly, one could argue that the same problem occurs here as happens with the synthesiser. The very flexibility that is not just the computer's advantage, but its very point, separates it from being a 'standard' musical instrument.

How serious is this problem really? It is certainly the case that people are perfectly able to become extremely fluent and expert at programming or operating computers, and it is usual for those people to specialise in certain fields. Would this not be the equivalent of learning a musical instrument? If one knows one particular programme, one particular platform intimately, is this not the same thing? My hypothesis would be that it is not, and that, more importantly, it suggests that in principle it is not, and therefore can never be. It is based on the proposition that

A standard instrument is limited in terms of its number of controllable parameters, but infinite in the performer's physical and mental ability to control them. An electronic instrument is (in principle) unlimited in terms of its number of controllable parameters, and because of the very implications of this, is limited by the performer/programmer's physical and mental ability to control them.

Because a computer is by definition non-standard, due to its programmability, it is forever within the performer's or composer's gift to alter the very structure of the instrument he or she is performing on or composing for. I would argue that this makes all programmable machines profoundly different from standard acoustic musical instruments. By definition, the 'instrument' is a 'subset' of the machine, in a way that is simply not the case for an acoustic, physical instrument.

The Organ

To return to the organ, how does the above definition affect our appreciation of this instrument? I see it as the exception that proves the rule. In one sense, in some of its more impressive manifestations, it can be seen as a first, pre-electronic version of the synthesiser. In this, it seems at the very least to suggest that the desire for an all-purpose musical instrument has existed for much longer than the development of electricity. At the same time, of course, even at its most complex, because of its non-electronic, mechanical nature, and versatile though it is, the number of parameters under the control of any particular organist is still quite limited - it clearly in no way compares to a digital synthesiser or computer - allowing the possibility (in the largest and most elaborate cases probably pushing the dexterity of the organist to the limit) that a musician may learn a particular instrument in some detail. However, it is clearly the very differences between instruments that seem to be a limiting factor, even accepting that these differences are often of degree rather than of a fundamental nature - any non-electronic organ achieves its sound in the same fundamental way. Clearly, as the above quotations also indicate, even supporters are quick to weigh the organ's flexibility against these criticisms, and to claim that this very flexibility makes the organ a special case. I would agree, with the addition that it has now been joined by another special case - music technology.

Virtual Reality

What about virtual reality? Presumably it is not inconceivable that at some point in the future it will be possible to build a 'virtual' acoustic instrument. Views are divided on the possibilities of this and other applications of virtual reality, but hypothetically, would a virtual piano, or a virtual flute, whose physical reality we were unable to distinguish from the 'real thing', not disprove my hypothesis? Perhaps, and there are many in the field of artificial intelligence who would consider it a certainty. But there is still the objection that as long as the user had control over his or her virtual reality, the possibility of 'editing' the instrument would exist. If you wanted, for instance, to compose a piece in which a violin had six strings, or in which the performer had an extra hand, or was able to configure a 'virtual' hand into physically impossible formations, what would stop you from doing it? Composers have been pushing the limits of standard instruments for many years, and one might even argue that the history of composition and performance in the western sense consists in large part of pushing the limits of the possible. What else are certain works of Wagner, Mahler or Stockhausen other than attempts at pushing the practical limits of the orchestra? A significant element of the Bach violin partitas is his ability, through the performer, to obtain effects that were not normally feasible. Over time, these have become a part of the 'standard' repertoire, just as preparing a piano is no longer considered a particularly revolutionary act.

Unless our virtual performer had similar limitations to those imposed by physical reality on acoustic instruments, and unless those limitations were similarly impossible to overcome, I find it impossible to believe that digital electronic instruments can ever be 'performed' in the same way. Nor, I would argue, will it ever be possible to achieve the same subtlety and nuance as is and has been achieved by a highly talented 'acoustic' performer. And if these limitations were somehow put in place, what would be the point of the virtual instrument? If its fragility, its limited scope for repair, its quality or its capacity for adaptation were 'hardwired' into the virtual reality, it would defy the very point of its existence.

Actually, there is a circumstance where this may be a practical issue, as it would involve considerably less time, expense and trouble, for instance, to allow members of an orchestra to carry around their 'virtual instruments' on some compact storage device to be activated in the concert hall. But even this would, surely, require an impossible feat of discipline on the part of composers and performers - is it really possible that no composer or performer would attempt to experiment with his or her virtual instrument - in other words, to turn them all into synthesisers?

None of this is particularly new - most electroacoustic 'performances', especially those created live, have to be prepared in advance precisely because of these problems. Currently, the 'instrument' must be meticulously pre-programmed and automated processes must be set up - in fact, the 'instrument' itself must be constructed. What is more, due to the very nature of the electroacoustic medium and environment, where technology is continuously changing and knowledge and experience advancing, there is often a clear pressure to ensure that any configuration used previously is not used again (presumably, again, at least partially because the nature of the machinery is that it is not set but configurable, and because currently such a large part of the process of composing and performing electroacoustic music involves the technical, practical side). There is, however, one area where virtual reality must, in my opinion, eventually play a role (in fact it already does so), and I shall return to this briefly at the end of the talk.

5. Is there a fundamental difference between composing for electroacoustic instruments and composing for standard acoustic instruments?

The arguments above alone must draw us to the conclusion that at the moment there is a fundamental difference in principle between composing for these media. When I write a piece for piano, I am not only writing for a particular structure playing a particular sound, I am using, perhaps subconsciously, a large set of data associated with that instrument, and this data must include one's experience and understanding of performances and compositions of which one has knowledge. To this extent, and perhaps rather paradoxically, one can argue that Cage's development of the prepared piano can be described as a change in the physical structure, or at least the physical potential of the piano, while the piano remains a piano.

But there is another, possibly more substantial area of difference between these media. Although over recent years there has been a considerable increase in the use of 'live' electronics, brought about mainly through a substantial increase in the 'bangs per buck' available for any given technology, the majority of electroacoustic music has been 'written' for tape performance. This is because, in the main, it is too difficult or impossible to undertake a 'performance' of the piece in real-time. What exactly does this mean, anyway? If I use a process that transforms one five-second sound into another five-second sound, and its complexity is such that it takes ten seconds for the effect to be achieved, it would appear that in principle the sound is 'unperformable' in real-time. This logic is rather circular, however, as one might just as well say that a performance of the sound is the playing of it once, after the method by which it is created has itself been created. We would not expect a flautist to manufacture their instrument at the start of each concert.
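The point can be sketched in code (Python, for illustration only; the ten-second delay simply simulates the hypothetical slow process described above):

    import time
    import numpy as np

    SAMPLE_RATE = 44100

    def slow_transform(sound):
        """Stands in for any process whose computation takes longer
        than the sound it produces."""
        time.sleep(10)      # ten seconds of 'work'...
        return sound[::-1]  # ...yielding another five-second sound

    five_seconds = np.zeros(5 * SAMPLE_RATE)
    start = time.time()
    rendered = slow_transform(five_seconds)
    print(f"{len(rendered) / SAMPLE_RATE:.0f} s of sound took "
          f"{time.time() - start:.0f} s to compute")
    # Stored to disk, the result is then simply played back: the
    # 'instrument' is built before the concert, like the flautist's flute.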

Actually, according to the most extreme proponents of artificial reality, this is not in principle a problem, as it is in principle possible to interrupt the nerve impulses to the brain to the extent that our perception of time is controlled. Under this hypothesis, the above five-second sound is created and output however many samples at a time while our brains are taking input. If a delay is required (the computer would judge this), our brain's real-time input facilities would be suspended until the next samples became available for playback. In this way, our actual perceptions of the sound (or for that matter of any artificially created phenomenon) would appear to us to be continuous and in real-time.
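Setting the neurology aside, the mundane computational analogue of this idea already exists in buffered streaming, where the consumer of the sound simply waits until the next block of samples has been produced. A sketch under that reading (block sizes and timings are arbitrary assumptions):

    import queue
    import threading
    import time

    BLOCK = 1024            # samples per block (arbitrary)
    buffer = queue.Queue()

    def synthesise():
        """Producer: generates sound more slowly than real-time."""
        for _ in range(8):
            time.sleep(0.2)            # synthesis lags behind playback
            buffer.put([0.0] * BLOCK)  # deliver the next block of samples
        buffer.put(None)               # signal the end of the sound

    threading.Thread(target=synthesise).start()

    # Consumer: 'playback' blocks - is 'suspended' - until each block
    # exists, yet from inside the loop the stream appears continuous.
    while (block := buffer.get()) is not None:
        pass  # a real system would hand the block to the audio device here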

However, this argument has the implication that a pre-prepared sound has an equivalent 'status' to a sound created 'live'. Is there a fundamental difference between these two types of event, or are they effectively the same?

The manufacturers who have produced and sponsored the popularisation of MIDI would presumably argue that a pre-prepared sound (whether recorded or synthesised) is indeed practically equivalent. Indeed, in one respect the whole area of digitally recorded sound is couched in the language of something being 'life-like' and 'indistinguishable from the real thing'. Practical musicians have for some time felt threatened by the spectre of 'virtual music', composed on and performed by computers of one sort or another, and very frequently used nowadays to avoid the time, effort and expense of arranging for performances or recordings by 'live' musicians. Especially when attention is deflected in some way, as with a moving image, many people find it difficult or impossible to distinguish between the two. Here, to some extent, we return to the question posed above - will it be possible to create virtual instruments that are indistinguishable from the 'real' thing? But there is an additional element here - recording and reproduction.

As was mentioned above, most electroacoustic pieces are composed 'with the help of' some form of recording, usually in order to facilitate the hearing of events otherwise impossible to achieve in real-time. Here, a number of questions arise -

Is it a given prerequisite of electroacoustic music that many or most sounds should be 'impossible' to achieve live?

Is this an excuse to avoid live performance?

Is live performance necessary, or in some circumstances even detrimental to a piece of notated music? (cf Glenn Gould)

If one were to attend a performance only to discover that the performer had pre-recorded the concert and 'mimed', would one feel disappointed? What if one didn't know? Why should the performer even bother to mime? If one knew that the performer had freshly recorded the concert directly before the 'concert', in the same room, on the same instrument (perhaps with the same audience except, of course, you!), would that mollify you?

I don't think that it is possible to answer these questions at the moment. They merely put into perspective some rather difficult ideas. I think, however, that before electroacoustic music advances much further they will need answering. It is worth pointing out, in addition, that different answers may well be applicable to different people and in different situations. It is already the case that in much popular music, material is recorded in advance and simply 'replayed' - even during an otherwise live performance. It is also true that such practices have generated some criticism, but this hasn't stopped them from happening. This would suggest that a substantial proportion of the public is not particularly concerned - and would perhaps be happy to sacrifice elements of live performance rather than lose the chance of seeing (as opposed to hearing) their favourite 'performer' live. Even in these cases, however, there are limits - a few years ago the 'performing' duo Milli Vanilli were disgraced when it was revealed that others had performed instead of them on recordings released in their name! One is tempted, given the use of mime and other forms of electronic enhancement, to ask what the problem was…

6. Is there a fundamental difference between a live performance and one assembled on tape and then played back 'live'?

There seems to be one simple and glaring answer to this question, although at first it may seem too obvious and straightforward to be of any help. Clearly, the most obvious difference between a live performance and one prepared on tape and then played back is that the former will be different each time it is performed, whereas the latter will not, unless it is recomposed in some way for each performance. Even in that case, there will be no 'differences' made during the time of the performance (unless, of course, a method were found to do so). Is this important?

As has been discussed at length above, there are, in any case, clear differences between the 'instruments' involved. The entire point of an acoustic instrument is that it should be heard live. The interface is designed in order to aid this - a small number of performable parameters that can themselves be infinitely varied. The way music is written emphasises this - the information given in a written score is tiny in comparison with the information provided in a live recording: witness the difference in file sizes between a music publishing score file and an audio recording. This information 'missing' from the score is provided by the performer. Note that the quality of this information is entirely undefined. One cannot include information in the score that guarantees a good performance - this, presumably, is what in one sense is defined in an audio recording. However, the information given in an audio recording cannot, as yet, be edited in detail - nor are there signs that this will become any more possible in the future. Even if it were possible to edit such a recording - which might, indeed, solve some of the problems mentioned here - what would we have but another form of virtual reality?
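The disparity can be made roughly concrete with some back-of-envelope arithmetic (the note counts and byte costs below are illustrative assumptions only; the orders of magnitude are the point):

    # Back-of-envelope comparison; all figures are illustrative assumptions.

    CD_RATE = 44100 * 2 * 2      # samples/s x 2 channels x 2 bytes/sample
    audio_minute = CD_RATE * 60  # ~10.6 million bytes per minute of sound

    NOTES_PER_MINUTE = 600       # a busy minute of notated music (assumed)
    BYTES_PER_NOTE = 8           # pitch, duration, dynamic, articulation...
    score_minute = NOTES_PER_MINUTE * BYTES_PER_NOTE  # ~4,800 bytes

    print(f"audio: {audio_minute:,} B, score: {score_minute:,} B, "
          f"ratio ~{audio_minute // score_minute:,}x")

Everything in that roughly two-thousand-fold gap is, on this argument, what the performer supplies.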

7. What makes an 'interpretation' of a live performance? How much difference, and of what nature, is there between the interpretations of one performer and another? Can this be interpreted as simple 'unpredictability'? How might one describe this difference in detail?

One of the fundamentals in this area is the phenomenon known as 'interpretation'. As suggested above, the interpretation of any given piece begins when a performer begins to learn the music, and continues to develop as any technical difficulties are overcome. With continued practice, familiarity and experience, and because the number of parameters available for manipulation on any acoustic instrument is strictly limited, the performance becomes increasingly involved with matters of interpretation. In general, most musicians would see the interpretation as the most important element of the musical process. It is because the interpretation occurs 'live' that the results of each performance are more or less different. And clearly, it is when the performer achieves such a degree of control over his or her instrument and a certain degree of familiarity with the music to be performed that the 'best' interpretations can occur, although of course factors such as physical and mental well-being can be important.

8. Can there ever be such a thing as a 'standard' musical instrument that relies on technology, or does one - the computer - already exist?

Based on the arguments above, I do not believe that there is a 'standard' musical instrument based on digital electronics, nor could there be until the technology of virtual reality advances quite substantially. Effectively, this is the same question as: is it now, or will it ever be, possible for a digital electronic instrument to have an equivalent status to a standard instrument? It is also possible to relate it to the question: will it ever be possible for a 'virtual' performance to be indistinguishable in every possible respect from a 'real' performance? As this would have to include a large element of artificial intelligence, for reasons given above, it is not possible to answer this question at present. There is currently considerable controversy surrounding these issues, with opinions varying diametrically. At one pole is the view that it is only a matter of time until artificial intelligence and virtual reality are, literally, a reality. There may be differences of opinion as to how long the process may take, but ultimately, many believe that there is no problem in principle in achieving this end. Others take a more sceptical view, suggesting that considerable advances are still required in very fundamental areas of our understanding of reality before these goals can be achieved.

To put this matter into perspective, in order to achieve a fully convincing 'virtual' performance, one would not only have to programme the set of likely outcomes of any individual performance, but the complete set of possible outcomes. An example might be a virtual performer who does not perform, for whatever reason, the material that has been announced; it may involve the performer leaving a performance early due to illness; it may involve the performer not arriving at the concert hall, or the performer's instrument becoming inoperable during the performance. Ironically, without the possibility of these events occurring, the performance cannot be said to be 'real'.

For a much more complete account of these ideas, see chapter 5 of David Deutsch's book The Fabric of Reality.

There isn't time to go into a detailed study of the live performance situation - most especially to investigate what a member of the audience might want from a live performance - but what are the elements of a live acoustic performance? How different would an 'interpretation' be from a simple application of a few randomly contrived differences? What is the nature of a performance?

9. If a composer develops a computer programme for composing music, who is doing the composing - and of what relevance are different compositions made by the same programme?

There are a few other points that are of importance here. For instance, when one writes for, or plays, an acoustic instrument, one does not, in general, have to credit anyone other than the performer and the composer. In general, one does not thank Steinway or Boosey and Hawkes for their contribution. And yet in electroacoustic terms it is fairly common, if not mandatory, for composers to list or describe the equipment and/or software used to create the composition. Whether or not this happens, it is not uncommon for interested parties to ask after a performance, "how did you do that?", "what did you use to do that?", questions that are not generally so appropriate after a performance of an instrumental piece. Why is this?

10. Is there a fundamental difference between music written for live instruments on a computer programme such as Sibelius or Finale, and music written for live instruments by hand?

Another question that I haven't time to deal with here is the role of current notation software in composition for standard instruments. It may, currently, be a fairly minor point, but there is potentially an interesting problem here. With increasing frequency, composers of music for standard instruments are not only producing work using notation programmes, but also composing using them. If used correctly, there would appear to be no reason why, in principle, this should not work well, and indeed there are some advantages - an 'instant' form of auditioning (although admittedly usually of unsatisfactory quality), the lack of any requirement to reprogramme a score completely, automatic generation of parts, etc. However, as the number of these scores increases while the ability to actually reproduce them 'live' does not, there is a clear indication that problems are arising directly because of the replacement of handwriting with software writing.

Complexity

(See Deutsch)

At this point I must admit defeat and simply include, with no further comment, a further question which I have had no time to deal with, and which is perhaps less important to the general argument. Still, I think it may be valuable to pose it:

11. What is the role of complexity within musical interpretation?

12. Conclusions, and a new form of music recording/editing?

I have a feeling that many of the arguments given above, disparate and disorganised as they may well appear, will give a profoundly negative vision of the capability of music technology to express musical ideas with the subtlety, grace and depth that we are used to from expert performers on standard, acoustic musical instruments. Personally, I believe this is indeed currently the case, and that the above arguments explain, at least in part, a real difference in our appreciation of music for these media.

However, there are many ideas currently in development which attempt to break out of this impasse, whether or not those undertaking them would agree with my hypothesis, although the success of any one of them is far from certain. As in any area involving advanced technology, it seems to be a double-edged sword. Whichever way you approach the issue - if you are really looking at the problems in depth - you are faced with the 'reality' that ultimately, the only way forward in the development of music technology is the development of artificial reality, and therefore the development of artificial intelligence. This shouldn't be surprising if we take the view that many if not all human developments are themselves attempts to develop forms of virtual reality. As David Deutsch argues, is not the cave-painting of thirty thousand years ago an attempt at evoking within us certain states and emotions without resorting to the 'real' thing? While it is certain that the lure of technology - the clear benefits of creating sound using this medium - will continue and develop, ultimately, I feel, the future with the clearest advantages will be in the hands of those developing virtual environments. It will be the properties of these environments that will ultimately decide how sensitive and delicate the capabilities of musicians to truly manipulate music technology can be, at least in the sense in which performers manipulate acoustic instruments. Although some may feel that this is rather a long-term, general and impractical conclusion, we should not forget that virtual realities already exist and have existed for many years. Deutsch argues that the whole of mathematics, and indeed science, as well as all the arts, are forms of virtual reality - albeit forms that do not often use technology. This rather nicely completes the circle around which I may be seen to be travelling. Deutsch suggests that, in terms of the CD, we are already approaching a state of true audio artificial reality. In this limited area he may be correct, but of course he is not taking into account the role of the performer. Later he goes on to describe virtual renderings of, amongst other environments, a game of tennis and a roulette wheel. Each of these environments, which Deutsch considers perfectly feasible to render in principle, is much closer to what I would consider necessary for a true form of interaction between performer and instrument, as well as between performer and listener.

If I take a strictly musical example - currently we have two main methods of recording, replaying and manipulating sound. One is through MIDI, the other through digital audio recording. The former sacrifices quality of sound for size of data and ease of manipulation. The latter does the reverse. Presumably, what we would ideally like is something in between, where we have the quality of digital audio and yet are able to manipulate the result on a note-for-note, instrument-for-instrument, phrase-for-phrase basis, and this is clearly impossible at present. The nearest possible solution is to record each instrument separately and then mix them together. This would allow a certain amount of editing to occur on each part, although even this would distort the surrounding sound as well. Clearly, the real answer is to invent virtual instruments, which would be played by virtual performers. Effectively, this is what MIDI attempts to do - the MIDI data represents the instrumentalist, the synthesiser the instrument. And MIDI succeeds in some respects, although, as has been discussed, it would not convince a music purist. What MIDI lacks is subtlety in defining the action of a performer - the same MIDI data will be replayed each time, where a real performer would introduce subtle changes: he or she would interpret the music. What the synthesiser lacks is the 'real instrument' - methods of synthesis are usually based on algorithms far less complex than those produced by the union of human performer and physical instrument (if indeed this is an algorithmic process at all). A truly unique 'recording' would be one in which each time the recording is played, a new interpretation is heard, not just at the highest level, but at the level of each individual instrument. A side effect of this would probably be, as the data for such a 'recording' would have to include definitions of each instrumentalist as well as each instrument, that one could reconstruct a visual representation of each performance. In other words, you would have artificial reality.
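To see how wide the gap is, here is a deliberately crude sketch of 'interpretation' as MIDI humanisation (Python; the event format and jitter values are my own assumptions). Each call produces a different performance, but only in the sense of the 'randomly contrived differences' questioned in section 7 - nothing here responds to phrase, harmony or context:

    import random

    def humanise(events, timing_sd=0.01, velocity_range=6):
        """events: list of (onset_seconds, midi_note, velocity 1-127).
        Returns a copy with small random perturbations of timing and
        dynamics - different on every call, unlike raw MIDI data."""
        performed = []
        for onset, note, velocity in events:
            onset = max(0.0, onset + random.gauss(0.0, timing_sd))
            velocity = max(1, min(127,
                velocity + random.randint(-velocity_range, velocity_range)))
            performed.append((onset, note, velocity))
        return performed

    phrase = [(0.0, 60, 80), (0.5, 62, 78), (1.0, 64, 85), (1.5, 65, 90)]
    print(humanise(phrase))
    print(humanise(phrase))  # a second, different 'performance'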

There is also a greater interest and emphasis now on those aspects of the medium which involve what might be called extra-musical elements - such as multimedia, computer-generated composition, sound and multimedia installations, dance, as well as the inclusion of 'live' (in other words not entirely predictable) elements in a performance. If there is a challenge, it is in including these and other unpredictable elements in forms of music that are based on existing forms. The tendency is to accept that the medium must involve a substantially different approach (I believe precisely because of the ideas I have been outlining here) and that, therefore, there is no point in trying to find any points of contact other than direct adjacency - in other words, including a performer amongst an otherwise almost entirely predictable background.

As a side benefit, the medium is also attractive to those who like having direct control over all parameters, and to those who do not like performance or performing. This is not limited to music technology but applies to computing as a whole, where young people (principally young men) are noted (and criticised) for communicating with 'reality' solely through a computer terminal. Personally, I'm not sure that the same criticism cannot be levelled at the obsessive musician who only feels comfortable when playing the piano.

In 'real' reality, we are at the beginning of developments in this and other forms of technology, at least what we think of as ‘new’ technology. We should pay attention not only to those who warn us against future developments, but also those who imagine what may happen. We should not be afraid of allowing ourselves to experiment with these ideas. There must be many ways of failing before success is found: to this extent the study and development of music technology is similar to scientific investigation.

Phrases like 'artificial reality' have acquired a sinister character through the work of some science fiction writers, film makers, politicians, sociologists, etc. We should not forget that CDs, synthesisers, computers, the internet, and even a live musical performance by a live musician on a real, acoustic musical instrument are, in effect, forms of virtual reality. Technologically, such a performance is far in advance of what could have been achieved even 10,000 years ago. I think that if developments in music technology prove anything in future - even, ironically, if much of our appreciation of music ends up being through truly virtual means - those means will need to take into account what we now think of as the interpretation of the live, human performer, and the information that he or she adds to the 'score', in whatever form - paper, electricity, or brain tissue - that may eventually be.

Richard Hoadley, April 2000