Excellent point! In the context of early electronic music (thinking of Varèse), there was a sort of resentment of the fact that a composer was limited to working with the kinds of sounds that can be "drawn from horse hair and catgut", to an octave divided into twelve, or to any instrument other than the composer's imagination. That's how I understood the criticism (which was also voiced in an academic context).
Great points. It's amazing how people seem to want to pay no attention to the whole field of aesthetics when they see a computer make some sounds. Computers can help in composition, of course, but a computer composing by itself is a whole different matter.
I find it interesting that your comment says restriction in sound leads to better musical creation, while the comment you're replying to remarks that the best electronic performance they saw was one where all the music was built from a single sample.
I think a lot of the dislike comes from simply not understanding many of the techniques. E.g. in the second example, there's a bit where the melody is extremely staccato, but the note length isn't achieved by the keyed duration; instead, a flurry of other percussive notes overwhelms the compressor/normaliser. I guess without knowing that, it might seem uninteresting.
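To make the trick concrete, here's a minimal sketch (Python/NumPy; the compressor design, thresholds, and signals are all my own invention for illustration, not how any particular track was produced) of how a flurry of loud percussive hits can pull a compressor's gain down and effectively chop a sustained note short:

```python
import numpy as np

def compress(signal, threshold=0.5, ratio=4.0, window=64):
    """Crude feed-forward compressor: estimate the level with a moving
    average, then reduce gain whenever the level exceeds the threshold."""
    env = np.convolve(np.abs(signal), np.ones(window) / window, mode="same")
    gain = np.where(
        env > threshold,
        (threshold + (env - threshold) / ratio) / np.maximum(env, 1e-9),
        1.0,
    )
    return signal * gain

# A quiet sustained "melody" note, plus a loud percussive flurry in the middle.
sr = 8000
t = np.arange(sr) / sr
melody = 0.4 * np.sin(2 * np.pi * 440 * t)
flurry = np.zeros_like(melody)
flurry[3000:3500] = 0.9 * np.sign(np.sin(2 * np.pi * 80 * t[3000:3500]))

ducked = compress(melody + flurry)
# While the flurry sounds, the compressor pulls everything down,
# including the sustained note -- it is effectively cut short.
```

In effect the percussion acts like a sidechain signal ducking everything else, which is why the keyed note can come out far more staccato than it was actually played.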
If you understand stuff like this you can appreciate the music. I'm not exactly going to sit down and listen to it in a dark room sipping some red wine, but it's a new and unique compositional technique that can be incorporated into other music.
Technology isn't what makes music modern. The points raised in the article go much, much deeper than that. This makes me think of the composer Conlon Nancarrow, who mostly composed for player piano. He was often credited as being a "father of electronic music", which baffled him. As far as he was concerned, he'd never composed a single note of electronic music. But his music was the earliest attempt at seriously exploiting the ability of machines to play things humans could not play.
But synths and sequencers? They're not "modern". I find it hard to imagine music much more primitive than the boring wub-wub-wub of dubstep or the endless 4/4 oontz of Eurodance.
Serialism and 12-tone composition, for all their shortcomings, represented serious questioning of how notes should be organized at all. I think they failed because they dehumanized, and humanity is at the heart of music. A conscious disregard for consonance/dissonance isn't the same as choosing the balance of consonance and dissonance. They were interesting intellectual exercises, but as a musician, I think they deserved to die. They're the music of the past now.
Most of the real experimentalism in music I hear these days (and I have some bias here) is in the noise and free improvisation fields. I'm biased because I also play free improvisation. But undermining rhythm and harmony, and focusing on tonal evolution - that's interesting and daring. And that music is made with a seriously analog approach, not computers. Computers are generally very restrictive (although there are exceptions - the people doing million-note sequences are fascinating noise experimenters). One of the best modern music performances I've ever heard was an improvisation by a single percussionist, who spent several minutes carefully wadding a piece of cellophane. It was complex, dynamic, exciting, and above all, musical - but it worked by tossing aside virtually every musical convention except "be interesting".
There are plenty of constraints in this style. You need to have very many notes. It must be a single instrument. It should presumably sound good or otherwise have some musical value. There is a lot of exploration possible within these constraints.
Most of the criticism in this thread is just an eloquent formulation of "Get off my lawn" and schoolyard insults. No different from the literally tens of thousands of previous instances where a new artistic medium or mode of expression has been criticised.
I don't agree with your premise. The composition is as much art, often more so, than the performance. For most of the music I listen to, including classical music, I want the performance to be a faithful reproduction of the intent of the composition. For sheet music, you need the performer to interpret, but if he or she interprets outside of well-established norms, the piece will sound off.
For electronic compositions rendered directly to a sufficiently precise format (which MIDI is not), you need no separate performance: the acts of composing and performing are one and the same.
Since I reject your premise, your conclusion is irrelevant to me, and I don't think there's any chance we will get any further.
I see from other comments that you imbue the touch of a human performer with some special quality beyond the purely physical qualities of the sound generated, and to me that is pure superstition with no basis in reality. You might as well try to convince me fairies are real.
> Therefore a computer can not reproduce the essence of music.
Music that wasn't written on and for a computer, no. Yet it's perfectly possible to manually craft "variation of duration, velocity, loudness" for every single note of every single instrument -- just not by feeding music in standard musical notation into a sequencer unchanged! I agree that MIDI isn't very sophisticated, but it's hardly the last word on music written on and played back by computers. Just consider how young this all is! I'm pretty sure physical instruments and the songs played on them started out kind of simplistic, too. And tribal music, for example, often isn't so much about expressing emotion as about putting people into a trance-like state through endless repetition, and techno does that just nicely already. It's not my cup of tea generally, but I get the same out of chiptunes: I don't need sophisticated music, I just need a canvas for my ears and soul to draw on; I can fill in the blanks or dream up harmonies on my own.
> An interpret has to understand the emotions that should be transported.
True, but also
a.) it doesn't stop there. Beauty is in the eye of the beholder, and if a simple "gridlike" composition makes me sad, happy or gives me goosebumps, that's "soul enough" for me. Even the soul of a simpleton is still a soul :)
b.) the computer enables composer and interpreter to be the same person, and if they so desire, they can put endless amounts of detail and emotion into a piece. Personally, I have no doubt that people like Mozart would have been all over computers as an instrument, given the wide range of expression they already offer.
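For illustration, here is a tiny sketch of what putting per-note detail into a grid-like sequence can look like. The note format, jitter ranges, and the `humanize` helper are all invented for this example; real tools expose far finer control, including hand-editing every value:

```python
import random

# Hypothetical note format: (start_beat, duration_beats, midi_pitch, velocity).
grid = [(i * 0.5, 0.5, 60 + (i % 8), 80) for i in range(16)]

def humanize(notes, timing_jitter=0.02, vel_jitter=12, seed=None):
    """Return a copy of `notes` with per-note timing, length, and velocity
    variation -- the detail a bare sequencer grid leaves out by default."""
    rng = random.Random(seed)
    out = []
    for start, dur, pitch, vel in notes:
        out.append((
            max(0.0, start + rng.uniform(-timing_jitter, timing_jitter)),
            dur * rng.uniform(0.85, 1.0),   # slight detachment between notes
            pitch,
            max(1, min(127, vel + rng.randint(-vel_jitter, vel_jitter))),
        ))
    return out

performed = humanize(grid, seed=42)
```

Random jitter is of course the crudest version; the point is only that nothing about the medium stops a composer from specifying (or hand-crafting) these values note by note.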
>There has strangely been a long-term dream of computer scientists to replace composition. To "spit out" songs as someone put it. It's usually too scary to ask ourselves "why?" because it usually is a game of validating one's own mind against others' impressions. For some reason, auto-composition seems like some kind of holy grail, but of what?! Saving money buying music? A fantasy of abundance? A kind of "gotcha!" that a pure thought-person has outwitted a silly irl composer? What do you actually get for creating an intelligence that wins a Turing test? You certainly don't get sweaty friends deliriously dancing on drugs at 3 am. You typically just get another social promotion in the direction of aiding greater powers at their control over the world. Is that what it's about? Closing ourselves off from human musical expression in exchange for increased financial standing? Get a job bc you proved you can fool some of them of the time? To validate a work ethic that regards music as frivolous by demonstrating that it can be simulated accurately enough?
To me, music is music, regardless of the creator. If it sounds good, it is good, whether it was composed by a human, a program, or a combination of the two.
If your issue is that this generative music doesn't sound sufficiently good compared to a good human producer/composer, then that's fine. The rest just feels like some kind of weird projection onto my post that I don't understand.
Computer-generated music is not at all pursued for financial reasons; I don't even know what you mean by that. I think it's interesting because:
- It's one of the areas where the best humans still greatly outperform the best programs
- I believe computers do have the potential to one day create excellent, artistic music
I don't share your view of experiences, passion, cost to the individual, etc. I think Beethoven's music would sound just as good regardless of whether he was deaf, or a mass murderer, or anything else. I think art stands on its own, with the backstory serving as interesting trivia for those who want to know more about its creator.
I'm incredibly surprised by the negativity of the comments here. Taking a medium, in this case a piano midi track, and pushing it far beyond its limits, while still resulting in something resembling music, is just about as close to hacking as you can get.
Also, constraints breed creativity, and this is just another example of that.
It's not the type of music I would listen to generally, but insisting that this should be the case completely misses the point of what's interesting about it.
There is a vast array of electronic music out there, that sounds nothing like traditional musical instruments. I don't know what techniques artists are using to produce these sounds, but why would you assume that they're not exploring timbre space effectively?
I don't get that mindset at all. A lot of the music I listen to is not playable on analogue instruments without severely butchering it.
Ranging from 8-bit chip tunes to much more complex electronic music.
Why does that affect the level of communication? To me, the only difference compared to a song is that for electronic music the communication comes mostly from the composer. But I find that to be the case for most instrumental music, including classical music - a performer who adds so much "personality" to the piece that I notice it will generally annoy me.
I wouldn’t say modern electronic music has a higher level of ‘sonic texture’ than orchestral music, or any music using traditional instruments. The difference is that in modern music ‘sonic texture’ is an explicit mode of “authorial expression.” The sonic textures created by acoustic instruments are arguably richer, as they are capable of much subtler modes of expression as playable instruments.
It’s just that the development of novel acoustic instruments is a whole separate craft, subject to the annoying vicissitudes of the sonic properties of physical matter. The traditional instruments were developed over centuries.
As soon as instruments became electrified artists began using the ability to express themselves directly by manipulating the sounds themselves.
The sonic experimentation dominating modern pop music is entirely the result of the complete digitization of the sound generating chain.
I also think another factor is that digital synthesizers are woefully impoverished as instruments capable of expression through musical performance. Outside of the voice, modern pop is devoid of real-time musical expression. It’s become a non-real-time process, closer to writing, animation, the visual arts.
This forces the composer to rely on the native capacities of the instruments to express ideas, and the one capacity that is completely unavailable in the acoustic realm is changing the fundamental timbre of the instrument.
I’m a producer and recording engineer. The author of the original piece is missing that the only reason a composer could imagine they were working primarily with the modes of melody, harmony, and rhythm is that there is a highly developed tradition of musicianship and instrument design to fill in the most fundamental aspect of music, which is the actual sound.
Edit: there is another huge factor in the decline of melody which is the product of two interrelated technology developments. The first is the use of loop based sequencing techniques for compositional work, and the second is that the random-access editing techniques made possible by modern digital audio workstations extended the loop based composition process to all sounds, including the voice.
Loops are basically short compositions. If you spend a lot of time in this mode of composition, your ideas will tend to be short. The actual mechanics of how the music is made disincline the composer from constructing both traditional harmony and melody.
The DAW has fundamentally disconnected music from the strict relationship with linear time that was inherent before the age of recording. To some extent musical notation allowed composers to work around this, but the end result was always an expression that had to have a thought out beginning, middle, and end.
I think he's being a bit too histrionic about EMI here. It produces good music, but it has to be highly "inspired" by a human composer first. Without the works of a genius artist to consume and process, EMI would not be able to produce anything lovely itself. It can only elaborate on a composer's style, not have one of its own. For now, computers still don't have real creativity.
I think the dissatisfaction you're talking about arises necessarily from projects that generate notes with a network trained on a bunch of MIDI data. Our ears expect music to have structure (chord progressions, etc) but models like this only really mimic small-scale features of the source. So any given phrase of 3-5 notes can sound musical, but over the course of a measure or two the illusion breaks and it sounds largely indistinguishable from a random walk of notes.
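A toy first-order Markov generator makes that failure mode easy to see: every individual transition is drawn from the training data, so short fragments sound plausible, but the model spans only adjacent pairs, so nothing constrains the output at measure or phrase scale. (The corpus and all names below are invented for the example; real projects use neural networks over far more data, but the structural criticism is the same.)

```python
import random
from collections import defaultdict

def train_markov(pitches):
    """First-order Markov model: records which pitch tends to follow which.
    By construction it captures nothing beyond adjacent pairs."""
    table = defaultdict(list)
    for a, b in zip(pitches, pitches[1:]):
        table[a].append(b)
    return table

def generate(table, start, length, seed=None):
    """Random-walk a melody: each next note is sampled from the successors
    of the current note, falling back to `start` at a dead end."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < length:
        successors = table.get(out[-1])
        out.append(rng.choice(successors) if successors else start)
    return out

# Toy corpus: a C-major phrase as MIDI pitch numbers.
corpus = [60, 62, 64, 65, 67, 65, 64, 62, 60, 64, 67, 72, 67, 64, 60]
model = train_markov(corpus)
melody = generate(model, 60, 32, seed=1)
# Every adjacent pair in `melody` occurs somewhere in the corpus, yet at
# measure scale it wanders with no chord progression or phrase structure.
```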
The electronic music point is good, but I wouldn't label electronic music as inferior or not art.
It's about choosing which parts of your stack are artisanal vs which parts are implemented for you. These choices impact the form factor, but I wouldn't say that they diminish the work itself.
What is your point? Are you implying that people thought electronic music was otherwise difficult or expensive to produce? Did you think people didn't know you could write code to generate sound?
...Or are you just being reductive and showing off your programming knowledge?
Maybe it's that music matters, and even instrumental music contains a level of content and meaning similar to that of speech.
The minute we realise music isn't the authentic voice of another human being we reject it.
Even my enjoyment of arguably mechanical music like some of the early minimalist stuff - where to a degree processes determine the structure and unfolding of the work - is mediated through my awareness of the authorial intent of the composer. (I'm talking more about Piano Phase than Pendulum Music here - the latter is slightly too devoid of human intervention to be 'real music' for me)
How would you feel about a novel generator? Why is that so different?
Western tonality has some rules. Composers can operate outside of that if they want. With computers, a single composer could make up a new set of rules for every piece. You can turn a drum beat into a note by accelerating the wave form, invent your own scales.
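Both of those ideas are easy to sketch. Speeding a waveform up by resampling raises its pitch (push it far enough and a drum loop reads as a tone rather than a rhythm), and an "invented scale" can be as simple as dividing the octave into something other than twelve equal parts. This is a minimal illustration with made-up helper names, not any particular artist's technique:

```python
import numpy as np

def accelerate(samples, factor):
    """Play a waveform `factor` times faster via naive linear resampling.
    Pitch rises with the factor; accelerate a drum loop enough and its
    repetition rate crosses into the audible-pitch range."""
    n = len(samples)
    positions = np.linspace(0, n - 1, int(n / factor))
    return np.interp(positions, np.arange(n), samples)

def edo_scale(base_hz, divisions, steps):
    """An invented equal-temperament scale: split the octave into
    `divisions` equal parts instead of the usual twelve."""
    return [base_hz * 2 ** (k / divisions) for k in range(steps)]

# E.g. a 19-notes-per-octave scale starting from A440 -- one of infinitely
# many rule sets a composer could adopt for a single piece.
nineteen_tone = edo_scale(440.0, 19, 20)
```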
It is interesting that even electronic artists tend to operate within the constraints of the western tradition. It is probably a struggle to find an audience once you start doing wacky things. Or maybe it will happen, but has not yet.