You are talking about (dynamic) compression applied to an entire song with the sole purpose of making it as loud as possible to the human ear while still complying with the maximum allowed peaks (or with the energy of the signal).
This does degrade the quality, sometimes heavily. Nevertheless it is done extensively by mastering engineers (who rarely enjoy it) as well as by radio and TV stations, because of the psycho-acoustic fact that a song appears to sound better when it is played louder. This gives them an advantage over the competition: on average, people scanning for a radio station are more likely to settle on yours if it is louder than the competition.
The main issue is that the current peak measurement of audio signals correlates only marginally with perceived loudness, and heavy compression is used to trick this system. The broadcasting industry is aware of this. An open and quite effective loudness measurement algorithm [0] was introduced a few years ago and is slowly being adopted all over the world through new broadcasting regulations: AGCOM 219/09/CSP (Italy), ARIB TR-B32 (Japan), ATSC A/85 / the CALM Act (US), EBU R128 (Europe) and OP-59 (Australia). iTunes Sound Check is also based on [0], and since this year YouTube applies it to newly uploaded videos as well [1]. Even games use [0] to keep their audio at a consistent loudness.
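A stripped-down sketch of the measurement idea behind [0]: the signal is run through a "K-weighting" filter (a high-shelf pre-filter followed by a high-pass), and the mean square of the filtered signal is converted to LUFS. This is a simplified mono version that omits the spec's gating stage and channel weighting; the biquad coefficients are the published 48 kHz values from the spec.

```python
import math

# ITU-R BS.1770 K-weighting coefficients for 48 kHz audio.
# Stage 1: high-shelf "pre-filter"; stage 2: RLB high-pass.
PRE_B = (1.53512485958697, -2.69169618940638, 1.19839281085285)
PRE_A = (-1.69065929318241, 0.73248077421585)
RLB_B = (1.0, -2.0, 1.0)
RLB_A = (-1.99004745483398, 0.99007225036621)

def biquad(x, b, a):
    """Direct-form-I biquad: y[n] = b0*x[n]+b1*x[n-1]+b2*x[n-2]-a1*y[n-1]-a2*y[n-2]."""
    y = []
    x1 = x2 = y1 = y2 = 0.0
    for s in x:
        out = b[0] * s + b[1] * x1 + b[2] * x2 - a[0] * y1 - a[1] * y2
        x2, x1 = x1, s
        y2, y1 = y1, out
        y.append(out)
    return y

def integrated_loudness(samples):
    """Simplified mono integrated loudness in LUFS (no gating, no channel weights)."""
    k = biquad(biquad(samples, PRE_B, PRE_A), RLB_B, RLB_A)
    mean_square = sum(s * s for s in k) / len(k)
    return -0.691 + 10.0 * math.log10(mean_square)
```

As a sanity check, the spec states that a full-scale 1 kHz sine should read about -3 LUFS; the -0.691 dB offset cancels the K-weighting gain around that frequency.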
So, slowly, the overuse of compression no longer gives music producers and broadcasters any advantage, and beautifully dynamic music will be competitive again.
I have collected some links [2] about this topic. Because there was no affordable implementation at the time, I created one myself [3], with some additional notes [4].
That can't actually account for perceived loudness, which modern mastering processes do account for. The absolute amplitude of a song has always been fixed, by the technology delivering the audio. Radio, the primary medium for music discovery throughout most of our lives, has a very hard upper bound (set by both the technology and the FCC), thus a compressor/limiter is employed at the final stage before sending the audio out of the radio station to go up the tower and out over the waves. There has always been that sort of leveling going on.
Modern tools provide an entirely new dimension in the form of multi-band compression, digital phase alignment, etc. It is now possible (and being done in nearly every genre) to make a recording perceptively louder than other recordings by maximizing amplitude in specific bands (those humans are most sensitive to), reducing phase cancellation between speakers, and hyping the sound (boosting high and low frequencies, which tricks the ear into hearing it the same way as louder music... but also causes listener fatigue faster), often all at once.
Amplitude compression, even when it's smart enough to recognize that there's more activity across a broader spectrum as provided by SoundCheck, does nothing to restore the damage to dynamic range, natural frequency response curves, and "real" sounding recorded music. The music is broken by these processes...the listener has no power to fix it, other than to not buy it, and choose music that hasn't been mutilated in such a way.
As a former sound engineer, I agree with everything here. Just one correction: Sound Check doesn't add compression to a song but rather adjusts (down) the volume of songs that are perceived to be louder. This is referred to as "normalization". Note: it does not adjust (up) songs perceived as quieter if they have peaks at 100%... because they would clip. Getting them louder would require limiting and compression.
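That down-only behaviour can be sketched in a couple of lines (the target value and function name are illustrative, not Apple's actual implementation):

```python
def playback_gain_db(track_loudness_lufs, target_lufs=-16.0):
    """Down-only loudness normalization: attenuate tracks that measure
    louder than the target, but never boost quieter ones, since a boost
    could push peaks past full scale and clip."""
    gain = target_lufs - track_loudness_lufs
    return min(gain, 0.0)
```

A track measuring -9 LUFS gets turned down 7 dB; a quiet -20 LUFS track is left alone rather than boosted.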
Firstly this is just a terrible article as from the heading "Digital Compression" onwards it deals with an entirely unrelated topic to the "Loudness Wars" meme mentioned in the title; this is confusing to readers. More on this section later.
The "Loudness War" meme describes a trend in popular music towards decreased dynamic range in the time-amplitude domain. In other words, less difference between the loudest and quietest passages of a piece of music; a higher average volume over time. While I totally agree that this trend has occurred, its negative effects have been largely overstated. The reality is that a lot of music produced decades ago had such a large degree of dynamic range as to make it hard to listen to in most circumstances. Modern mastering techniques allow us to rectify this without losing any of the power of the original piece.
Proponents of the "Loudness War" idea often cherry-pick specific examples of poor mastering where average volume over time has been increased using naive methods that do in fact have a negative impact on the sound. This does not accurately reflect the state of the art of audio mastering. Unfortunately mastering is often seen as a bit of a "dark art" due to the large amount of domain-specific knowledge required.
Because of this lack of knowledge, people are easily deceived by naively credible diagrams[1] showing one "skinny" amplitude-time graph and one "fat" one, claiming that the fat one is dynamically flat/ruined. However, this graph gives no information about the time-frequency domain, where much of the 'magic' of mastering takes place. The reality is that modern mastering techniques make much more effective use of all space available in each of the domains (time/amplitude/frequency).
In fact, many of these dynamic range management techniques are applied at the mixing stage of production (where all elements are processed separately), before the mastering stage (where all elements are processed together). With modern tools and an understanding of psychoacoustics we can make pieces feel louder while preserving the shape and power of the various elements. It might make the time-amplitude graph look "fat", but that's hardly the whole picture.
Why the "Digital Compression" section is misleading: after giving us an almost uselessly simplified explanation of the digital representation of sound, it implies that it is common to encounter digital sound with a low resolution (bit depth and/or sample rate). However, this is not the case: virtually all CDs and MP3s one is likely to encounter have a bit depth of 16 bits and a sample rate of 44.1 kHz[2]. This is high enough to be generally indistinguishable from higher resolutions by all but highly trained listeners.
It is well known that music media for "popular" consumption are often intentionally mastered with much less dynamic range. This is called "compression", but on the surface it has nothing to do with compression of bits*. It's the dynamic range that's compressed, bringing quieter and louder sounds closer together.
For markets with people that are more likely to care about sound quality, though, a much larger dynamic range is preserved. This is why the same album often sounds better on vinyl than on digital media[1]. It has nothing to do with the media, it's the superior mastering that was consciously chosen.
The Wikipedia article on the Loudness War[2] offers a good explanation.
* Well, technically, music with compressed dynamic range has less entropy, so it can be encoded at a lower bitrate without loss.
I respect your opinion, but it's only your opinion, not a global truth.
Compression is a style. There is far more to musicality and emotion than compression. The problem compression solves is that the environments where industrialized cultures now listen are not dedicated listening rooms but alternately loud and quiet places, so compression makes all parts of the music almost equally loud, leaving no dropouts where the quieter parts would be. There is no need, for example, to compress music for headphones to the extent that it is currently compressed.
I find that compression and other techniques, such as removing vocal breath sounds, make most recordings unlistenable. They don't sound like humans anymore, but like synthetic puppets animated by humans with conflicting values. Take the Foo Fighters, for example. They're popular, sure, but all of their songs sound like one continuous din. Between the compression induced by the guitar distortion settings, the compression added to the recording, and then the compression added by the radio station, it just sounds like a waterfall with a few bandpass filters changing between the verse and chorus.
Also their vocals have no dynamics. When he yells loud, the vocals don't get louder but the timbre changes. That changes it from cathartic to strained. The dynamics have all been flattened.
Why do you think the indie rock movement and bands and styles with wide dynamic range like the Pixies, Nirvana and dubstep got so popular? They eschewed the trend of hardline compression with alternating loud and quiet parts. They match the rhythm of human thought and motion which has fast and slow, detail and empty parts.
> That's like saying the internet is bad because there's porn on it.
Yes but on the internet you can go where there is no porn. Where can you find music with no compression?
> Compressing your track to hell in an effort to get it to sound louder doesn't work anymore because every streaming service will just normalize it anyway.
This is a bit of a confused remark. "Normalization" means applying a constant gain factor to the signal so that the loudest level is 0 dBFS. Compression applies a varying gain factor to the signal to meet some parameters. Normalizing audio will generally make it louder (unless the loudest level is already at 0 dBFS), but it does not change the dynamic range of the signal. The "loudness wars" were all about using compression, and this does change the dynamic range of the signal (you end up with less difference between the quietest and loudest parts; hence the term compression).
You can still "sound louder" by using compression, even if the peak volume is still 0dBFS.
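A toy illustration of the distinction (the sample values are arbitrary): normalization multiplies everything by one constant, so the ratio between the loudest and quietest samples is untouched, while even a crude static compressor shrinks that ratio.

```python
import math

def db_range(samples):
    """Dynamic range in dB between the largest and smallest sample magnitudes."""
    mags = [abs(s) for s in samples if s != 0.0]
    return 20.0 * math.log10(max(mags) / min(mags))

def normalize(samples):
    """Constant gain so the peak hits 1.0 (0 dBFS); dynamic range is unchanged."""
    g = 1.0 / max(abs(s) for s in samples)
    return [s * g for s in samples]

def compress_static(samples, exponent=0.5):
    """Crude static compression: a power law that pulls quiet and loud
    magnitudes closer together, reducing the dynamic range."""
    return [(1 if s >= 0 else -1) * abs(s) ** exponent for s in samples]
```

With samples spanning 20 dB, normalizing leaves the range at 20 dB while the power-law compressor halves it.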
The solution is to analyze each song before playing it to figure out how "loud" it sounds to humans. Then the music player adjusts the playback volume of each track to compensate for the perceived loudness. This way, the user does not have to adjust the volume for each track that comes on.
This is just going to perpetuate the arms race, because it's not hard to spoof that sort of thing by monkeying around with crest factors in transient designers, and people seem to have quite different preferences for compression/limiting. Tweaking gain and falling back to compression/limiting past a certain threshold will lead to pumping on some program material.
There is a standard for measuring this stuff, and thanks to years of people like me complaining about jumps in volume during commercial breaks on TV and the like, a loudness-measuring standard has been formalized and is being demanded by broadcast regulators (so it will become standard in audio production software over the next year or three). It's here: http://www.itu.int/dms_pubrec/itu-r/rec/bs/R-REC-BS.1770-3-2...
Waves, Dolby, Izotope etc. have all released plugins or free updates for 1770-3 compatibility, so it should become ubiquitous by mid-decade, as will automatic loudness normalization at the mastering stage, which wasn't previously possible in the absence of an industry-standard metric.
> compression makes all parts of the music almost equally loud
What you are referring to is only one particular use-case for which a "compressor" is used during music production: Most often people apply dedicated plugins to the "master" mix (the final result to be put onto CD or sold as a file). These plugins will apply what's called a "multiband compressor" which can reduce dynamics individually on different parts of the frequency range, and often do quite a lot more magic than what I claim to understand. -- Done excessively the result is the mentioned "always one volume" result of the loudness wars.
The compressor as a tool isn't limited to this use case, though. You will, for example, put it on an individual instrument's track to shape the relative strength of the percussive and decaying content of an instrument, say a drum or a plucked guitar: typical compressor plugins have an attack time which shapes how fast the reduction of gain follows the input signal. If you make this slower than the duration of the percussive sound, the instrument will sound more "aggressive" instead of being leveled down, because the "attack" sound is increased relative to the resonating portion. -- The end result sounds like the exact opposite of the excessive master compression people complain about.
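A minimal feed-forward compressor sketch illustrates the attack-time point; all parameter values here are illustrative. With a slow attack, the first few milliseconds of a burst pass through almost untouched before the gain reduction catches up, so far more of the transient survives than with a fast attack.

```python
import math

def feedforward_compress(x, fs, threshold_db=-20.0, ratio=4.0,
                         attack_s=0.005, release_s=0.1):
    """Feed-forward compressor: an envelope follower tracks the level, and
    the amount by which the envelope exceeds the threshold (in dB) is
    reduced according to the ratio. The attack coefficient controls how
    quickly the envelope -- and hence the gain reduction -- reacts."""
    atk = math.exp(-1.0 / (fs * attack_s))
    rel = math.exp(-1.0 / (fs * release_s))
    env, out = 0.0, []
    for s in x:
        level = abs(s)
        coef = atk if level > env else rel
        env = coef * env + (1.0 - coef) * level
        env_db = 20.0 * math.log10(max(env, 1e-9))
        over = max(env_db - threshold_db, 0.0)
        gain_db = -over * (1.0 - 1.0 / ratio)
        out.append(s * 10.0 ** (gain_db / 20.0))
    return out
```

Feeding it a drum-like burst (10 ms at full scale, then a quieter tail) with a 50 ms attack keeps the transient energy much higher than a 0.5 ms attack does.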
Then you will invariably have unintended peaks in real-world recordings, and if you don't want to level down the whole recording to account for those peaks, or manually level them down for every instance, you'll also employ a compressor plugin on problematic tracks.
Then there's the possibility to filter the part of the spectrum that triggers the compressor (its "sidechain"), or to let a compressor be triggered by one instrument (or group of instruments) and act on the signal of another instrument (or vocals, or a group of tracks): that way you increase the perceived separation of different voices in your mix, again increasing the perceived dynamics of a song.
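That sidechain routing ("ducking") can be sketched like this, with illustrative parameters: the gain applied to one signal is keyed by another signal's envelope, e.g. a kick drum pushing a bass track out of the way.

```python
import math

def duck(main, sidechain, fs, threshold=0.2, depth_db=-9.0, smooth_s=0.01):
    """Sidechain compression ("ducking"): whenever the sidechain signal's
    smoothed envelope exceeds the threshold, the MAIN signal is turned
    down by depth_db. The sidechain itself is never altered."""
    coef = math.exp(-1.0 / (fs * smooth_s))
    duck_gain = 10.0 ** (depth_db / 20.0)
    env, out = 0.0, []
    for m, s in zip(main, sidechain):
        env = coef * env + (1.0 - coef) * abs(s)
        out.append(m * (duck_gain if env > threshold else 1.0))
    return out
```

While the sidechain is silent the main signal passes unchanged; once the sidechain gets loud, the main signal drops by the configured depth.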
So, if you really want to find music with no compression at all, you'll likely only find classical recordings made only with one single X/Y microphone pair... :-)
The irony is that extreme dynamic compression only became popular after all of the media we listen to (CDs, MP3s, digital radio, etc.) went digital and we got >90 dB of dynamic range and signal-to-noise ratio. Tapes and records are typically less than 60 dB. So our digital music is capable of being as dynamically expressive as our ears, yet we clamp/ruin the dynamic range down to <20 dB.
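Those figures follow from the roughly 6 dB-per-bit rule for linear PCM:

```python
import math

def dynamic_range_db(bits):
    """Theoretical dynamic range of linear PCM: 20*log10(2**bits),
    i.e. about 6.02 dB per bit of resolution."""
    return 20.0 * math.log10(2 ** bits)
```

16-bit CD audio gives about 96 dB, comfortably beyond the <60 dB of tape and vinyl; 24-bit production formats exceed 144 dB.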
Not to mention that the biggest problem, at least in music, isn't the absolute peak (or normalization), but rather dynamic compression.
Everything mastered (or "remastered") since the late '90s has been destroyed with dynamic compression to make it "louder." It's the biggest crime against art in our times, but is poorly understood even by musicians who complain about it. Interestingly it's the older ones, like Neil Young and Bob Dylan who have been the most vocal about it, but incorrectly attribute it to data compression or sampling rates.
Radio stations often do additional processing of music to make it louder and crisper when played on a car stereo, often using techniques such as multi-band compression: the sound is decomposed into several bands, and each band has dynamic range compression applied with different parameters to maximize the perceived sharpness/loudness.
It destroys a lot of subtlety and sonic detail in the original, but in exchange you get an overall louder, more in-your-face sound, with highs that come through even on bad audio systems. On car stereos, where you have a lot of low-frequency rumbling sounds, this especially makes a difference. And if you ask a random person to give a subjective quality assessment of original vs that processed audio, they'll almost always feel as if the latter is of higher quality.
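A much-simplified two-band sketch of that decomposition (a real broadcast processor uses more bands, steeper crossovers, and time-varying gain; the one-pole crossover and static per-band gain here are only illustrative):

```python
import math

def split_bands(x, fs, crossover_hz=200.0):
    """Split into low/high with a one-pole low-pass; high = input - low,
    so the two bands sum back to the original signal."""
    coef = math.exp(-2.0 * math.pi * crossover_hz / fs)
    low, state = [], 0.0
    for s in x:
        state = coef * state + (1.0 - coef) * s
        low.append(state)
    high = [s - l for s, l in zip(x, low)]
    return low, high

def band_gain(band, threshold_rms=0.1, ratio=3.0):
    """Static gain for one band: if the band's RMS exceeds the threshold,
    reduce it according to the ratio (computed in dB)."""
    rms = math.sqrt(sum(s * s for s in band) / len(band))
    if rms <= threshold_rms:
        return 1.0
    over_db = 20.0 * math.log10(rms / threshold_rms)
    return 10.0 ** (-over_db * (1.0 - 1.0 / ratio) / 20.0)

def multiband_compress(x, fs):
    """Compress each band independently, then sum the bands back together."""
    low, high = split_bands(x, fs)
    gl, gh = band_gain(low), band_gain(high)
    return [l * gl + h * gh for l, h in zip(low, high)]
```

Given a loud low-frequency rumble mixed with quiet highs, the low band gets turned down harder than the high band, which is exactly the per-band leveling described above.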
Yes, but that's no barrier to compression, which reduces the dynamic range. Compressed material is all at about the same level. The reason people speak of the "loudness wars" is that after compression, even the parts of the music that would have been quieter are now almost as loud as the peaks.
It's actually quite ironic, I think, that compression has become such a fad only now that we have digital media with their much lower noise floor. We have more dynamic range available, but we're using less of it.
The main weapons in raising perceived loudness are brickwall limiters such as Waves L3, which raise RMS (root mean square) amplitude while keeping peak levels constant. I think Sound Check is based on RMS; please correct me if I'm wrong. If it were based on peak levels then, as you say, it would indeed be useless.
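A toy demonstration of that peak-vs-RMS point, using tanh soft clipping as a crude stand-in for a brickwall limiter (this is not the L3's actual algorithm): the output's RMS rises while its peak stays pinned at full scale.

```python
import math

def rms(x):
    """Root mean square amplitude of a list of samples."""
    return math.sqrt(sum(s * s for s in x) / len(x))

def loudness_maximize(x, drive=4.0):
    """Crude loudness maximizer: drive the signal into a soft clipper,
    then rescale so the peak sits exactly at 1.0 (0 dBFS). The RMS goes
    up while the peak level stays fixed."""
    y = [math.tanh(drive * s) for s in x]
    peak = max(abs(s) for s in y)
    return [s / peak for s in y]
```

A full-scale sine has an RMS of about 0.707; after "maximizing" its RMS is much closer to 1.0 even though the peak never exceeds full scale.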
I don't know how much or whether the other technologies you mention really help to boost perceived loudness much beyond what is measured by RMS. Multiband compression raises RMS. Boosting high and low frequencies (aka "bare fat bass and mad amounts of high end" [1] - beyond what is wanted for a good sound) could be defeated by measuring RMS based on equal loudness/frequency curves. I've never really used the other things you mention but whatever they are they can be defeated by technology that measures their effect in the listener device. That's if they can cheat RMS anyway - I'm not sure they can but if you do have data on that I'd be interested to see it.
As you say, sound quality is still lost, but if normalization (over the album length where necessary, of course) becomes the de facto standard in playback technology, then the incentive for making bad quality/loudness tradeoffs in album mastering is gone. The standard, if adopted, would restore dynamic range by taking away the engineers' incentive to compromise it. The engineers would doubtless breathe a sigh of relief, as most of them are more bothered about this than we are.
Analog radio would still be mastered stupidly loud. I would assume final stage compression in radio broadcast is keyed on peak not RMS level, again please correct me if I'm wrong. But it's a dying medium anyway.
I don't think today's professional mastering engineers are "poor" necessarily; some of this I think is marketing pressure. The Loudness War (https://en.wikipedia.org/wiki/Loudness_war) is a real thing and a large part of the decrease in modern mastering quality, in my opinion.
The thing is, this mastering technique has some upsides for both casual listening (it will stand out more from the rest of the pack) and also will sound "louder" on poor equipment without necessarily exceeding the equipment's capabilities. The significant downside is that a lot of the detail is lost at best, and at worst you get very audible clipping / distortion or unnatural "pumping" effects. So not good at all for those that go deeper in their music.
"Optimizing for all cases" might help end this loudness war, though. Many online streaming services (YouTube, Spotify, iTunes, etc.) have optimizing routines that aim for consistency in volume level. The net result is that extremely over-compressed music sounds dull and flaccid.
Many articles recommend more sensible overall loudness levels now. Although I haven't seen a specific LUFS number to hit, like there is for European television (EBU R128), aiming for something like -16 LUFS as this article mentions is a much better situation than before.
Your post led me down the rabbit hole of over-compression, not for the first time, and I ended up at this video which gave a very succinct answer to my question of: "Okay, I realize that music is getting compressed, but what are we losing?"
[0] ITU-R BS.1770, http://www.itu.int/dms_pubrec/itu-r/rec/bs/R-REC-BS.1770-4-2... [1] http://productionadvice.co.uk/youtube-loudness/ [2] https://www.klangfreund.com/lufsmeter/manual/#about_loudness [3] https://github.com/klangfreund/LUFSMeter [4] https://github.com/klangfreund/LUFSMeter/tree/master/docs/de...