This seems unlikely. CDs don't have physical sectors. But they do have generous error correction, with cross-interleaved versions of the data combined with parity spread out along the "groove".
The rule of thumb is that the error correction can compensate for gaps of up to 2.4mm, so if a hole is smaller than that, a CD should be able to cope.
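A toy sketch of why that cross-interleaving helps (not the real CIRC layout, just the principle): write data into a matrix row by row, record it column by column, and a physical burst of bad bytes turns into isolated single-byte errors spread across several codewords.

    # Toy interleaver: a 4-byte burst on "disc" becomes one bad byte
    # in each of 4 different rows, which is far easier to correct.
    ROWS, COLS = 4, 6

    def interleave(data):
        # write row-by-row, read out column-by-column
        return bytes(data[r * COLS + c] for c in range(COLS) for r in range(ROWS))

    def deinterleave(data):
        return bytes(data[c * ROWS + r] for r in range(ROWS) for c in range(COLS))

    original = bytes(range(24))
    on_disc = bytearray(interleave(original))
    on_disc[8:12] = b'\x00' * 4   # a contiguous 4-byte burst (a "scratch")
    recovered = deinterleave(bytes(on_disc))
    bad_rows = {i // COLS for i, (a, b) in enumerate(zip(original, recovered)) if a != b}
    print(bad_rows)               # {0, 1, 2, 3}: one bad byte per row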
>audio CDs which I've found don't use checksums or other mechanisms to prevent small defects from mangling the bitstream
Red Book audio CDs do use CIRC error correction, storing 8 bytes of parity data inside each 33-byte F3 frame. It is not enough to correct all errors, though, and Yellow Book data CDs store extra correction codes on top of it (276 bytes for each 2352-byte sector).
(And there's also an issue below the F3 frame level with EFM modulation: whether the merging bits are chosen properly to keep the DSV low enough.)
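As a rough illustration of what those parity bytes buy, here is a sketch using the third-party reedsolo package (not the actual cross-interleaved CIRC layout): each of CIRC's two Reed-Solomon stages carries 4 parity bytes, enough to correct up to 2 unknown bad bytes per codeword.

    # Sketch with the reedsolo package; real CIRC uses two cross-interleaved
    # RS stages, RS(32,28) and RS(28,24), each with 4 parity bytes.
    from reedsolo import RSCodec

    rsc = RSCodec(4)                         # 4 parity bytes -> fixes 2 bad bytes
    codeword = rsc.encode(bytes(range(28)))  # 28 data + 4 parity = 32 bytes
    damaged = bytearray(codeword)
    damaged[3] ^= 0xFF
    damaged[17] ^= 0xFF
    # recent reedsolo versions return (message, message+ecc, errata positions)
    recovered = rsc.decode(bytes(damaged))[0]
    assert bytes(recovered) == bytes(range(28))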
> CDs have significant error correction codes so if it sounds right it IS right.
For data CD formats, yes. For audio CD formats, readers are allowed to interpolate over uncorrectable errors (https://en.wikipedia.org/wiki/C2_error), which would not necessarily result in an abrupt skip or pop.
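Concealment can be as simple as linear interpolation over the flagged samples. A minimal sketch (hypothetical conceal function; real players do this in hardware using CIRC's uncorrectable-error flags):

    # Hypothetical sketch: average the neighbours of each sample the
    # decoder flagged as uncorrectable, rather than outputting garbage.
    def conceal(samples, bad):               # bad = indices flagged by C2
        out = list(samples)
        for i in bad:
            left = out[i - 1] if i > 0 else 0
            right = samples[i + 1] if i + 1 < len(samples) else 0
            out[i] = (left + right) // 2
        return out

    print(conceal([100, 200, -32768, 400, 500], bad=[2]))
    # [100, 200, 300, 400, 500]: the bad sample is papered over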
To be honest, it's probably less difficult than the tracking that a compact disc player has to achieve to play an off-centre, bent, wobbly CD. The error correction would be fairly similar too. How long have CDs been around now?
I think CDs use two layers of codes (C1 and C2) interleaved in a non-obvious pattern. An error burst that takes out a whole small block is actually spread amongst multiple larger blocks, and since you know which bytes are bad, you're correcting erasures instead of errors, which is easier.
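The erasure advantage is easy to demonstrate with the reedsolo package again: the same 4 parity bytes that can fix only 2 errors at unknown positions can fix 4 bytes whose positions are known.

    # n parity bytes correct n/2 unknown errors but n known erasures,
    # because the decoder doesn't have to spend capacity locating them.
    from reedsolo import RSCodec

    rsc = RSCodec(4)
    codeword = rsc.encode(b'erasures beat errors')
    damaged = bytearray(codeword)
    for pos in (0, 5, 9, 12):    # 4 corrupted bytes: too many as errors...
        damaged[pos] ^= 0xFF
    msg = rsc.decode(bytes(damaged), erase_pos=[0, 5, 9, 12])[0]  # ...fine as erasures
    assert bytes(msg) == b'erasures beat errors'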
The 100MB difference is not just due to the audio TOC being smaller than the ISO9660 or UDF file system metadata. It's also because of differences in error correction. I don't have the spec on hand, but I recall from when I was investigating this that CD-ROMs use more bits for error correction than audio CDs. That's why you can fit more audio data than "filesystem data" on a CD-R. Reading (ripping, digitally) an audio CD will likely result in different digital audio files every time, since the error correction is not that good, but good enough, for audio.
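The arithmetic behind that ~100MB difference roughly checks out for an 80-minute disc (back-of-the-envelope numbers, not taken from the spec):

    # 80-minute disc, 75 sectors per second
    sectors = 80 * 60 * 75                   # 360,000 sectors
    audio_bytes = sectors * 2352             # ~846.7 MB usable as audio
    data_bytes = sectors * 2048              # ~737.3 MB usable as Mode 1 data
    print((audio_bytes - data_bytes) / 1e6)  # ~109 MB difference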
I read into this when I was wondering why my CD-DA extracted .wavs came out with a different checksum every time. Vibration is one of the factors that would make the same audio CD, read with the same CD player, produce different digital signals some of the time or even every time.
CD-ROMs however, which store digital data, need better correction: you definitely don't want a bitflip in your .exe, while a minor amplitude difference (an uncorrected bitflip in the lower bits of a 16-bit PCM sample) is no biggie.
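To put numbers on that, a quick check of how far a single bitflip moves a 16-bit sample depending on which bit it hits:

    # A flip in the low bit barely moves a 16-bit sample, while a flip
    # in the top bit swings it by half the full scale.
    import struct

    def as_i16(u):   # reinterpret an unsigned 16-bit value as signed
        return struct.unpack('<h', struct.pack('<H', u))[0]

    sample = 12345
    print(sample ^ 0x0001)          # 12344: inaudible
    print(as_i16(sample ^ 0x8000))  # -20423: a loud click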
So… I'm not saying that the people using CD mats are informed (or have tested whether the mat makes a difference, or would even know how to go about testing this scientifically), but there's more to it than what I originally thought, which was "it's digital, so it never degrades". I wouldn't have known without checking the md5sum of my .wav, though.
Yup, the correct signal might not get through if you don't have perfect shielding. And a speck of dust on a CD could theoretically ruin a whole song.
That's why CDs (and cellphones, modems, and countless other digital devices) use channel encoding.[1] That way you don't have to have a perfectly noise-free signal to reconstruct the original information.
I don't know that many specifics of the DVD or Blu-Ray standards, but I'd put money on them using Reed-Solomon or something similar.
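A minimal illustration of the channel-coding idea, using Hamming(7,4) rather than the much stronger Reed-Solomon codes optical discs actually use: 4 data bits become 7 channel bits, and any single flipped bit is recoverable.

    # Hamming(7,4): encode 4 data bits into 7 channel bits; any single
    # bit flip in the channel can be located and corrected.
    def encode(d):                         # d = list of 4 bits
        p1 = d[0] ^ d[1] ^ d[3]
        p2 = d[0] ^ d[2] ^ d[3]
        p3 = d[1] ^ d[2] ^ d[3]
        return [p1, p2, d[0], p3, d[1], d[2], d[3]]

    def decode(c):                         # c = list of 7 bits
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]     # recompute the parity checks
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
        syndrome = s1 + 2 * s2 + 4 * s3    # 1-based position of the bad bit
        if syndrome:
            c = c[:]
            c[syndrome - 1] ^= 1
        return [c[2], c[4], c[5], c[6]]

    word = encode([1, 0, 1, 1])
    word[4] ^= 1                           # noise flips one channel bit
    assert decode(word) == [1, 0, 1, 1]    # receiver still recovers the data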
An audio CD has 2352 audio bytes per sector. Underneath that, the frames also carry C1 and C2 error correction (CIRC), with errors that C2 cannot fix flagged for the player.
On a data CD, those 2352 bytes are split into 2048 data bytes, plus 4 bytes of error detection, 276 bytes of error correction, and some other bytes including sync and an address. So there is an extra layer of error correction.
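The Mode 1 layout adds up exactly, with the "other bytes" being 12 sync, 4 header (address and mode), and 8 reserved:

    # CD-ROM Mode 1 sector layout (bytes)
    sync, header, user_data = 12, 4, 2048
    edc, reserved, ecc = 4, 8, 276        # ECC = 172 P-parity + 104 Q-parity
    assert sync + header + user_data + edc + reserved + ecc == 2352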
Both can be true. I take the parent commenter to be talking about reading data off the CD in a way that accounts for irregularities from inconsistent spinning speeds (I distinctly recall a friend pulling a CD out of the player and placing it back in while a song was playing, without any break in the song). Perhaps, additionally, there was some degree of resilience to things like blemishes and fingerprints.
It can nevertheless be true that CDs were failing all the time. The error correction really was robust enough to achieve the degree of functionality we enjoyed during the heyday of CDs, and yet, despite this, the fragility of CDs as an information medium meant they were still disappointing us.
The thing that's interesting about digital data, such as CDs, but also, say, a stone tablet with carved lettering, is that degradation is a binary occurrence: either the data is recoverable exactly, or it isn't.
With non-digital data (e.g. a painting like the Mona Lisa), it is degrading from the moment it's constructed, so the analogue of "bit rot" isn't really meaningful.
In contrast, with digital data (e.g. written text) we can recover and even duplicate the data in its entirety, unless the medium is so degraded that this becomes partially or entirely impossible, at which point you've got "bit rot".
If you have a vinyl record that's a bit scratched but playable, that's too bad; there is no process that really "fixes" the record, because the information was lost to that scratching, and this is the best it's ever going to be. In contrast, if I have a CD that doesn't play reliably (sometimes skipping) in a regular CD player, chances are all the error correction data is there, perhaps with repeated processing, for me to recover all the PCM data (CD audio is exactly 44100 samples per second of 16-bit stereo PCM data) and from that make a new CD which plays perfectly.
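For reference, that PCM format pins the numbers down exactly; it's also why 75 sectors of 2352 bytes go by per second:

    # Red Book audio: 44,100 samples/s, 16-bit (2 bytes), 2 channels
    bytes_per_second = 44100 * 2 * 2         # 176,400 B/s
    assert bytes_per_second == 75 * 2352     # exactly 75 sectors per second
    print(bytes_per_second * 74 * 60 / 1e6)  # ~783 MB for a 74-minute disc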
CDs are physical media storing digital data. The data is stored in microscopic pits on the disc surface that, like all physical media, can be subject to read errors due to various imperfections. They have built-in error correction, but it is not perfect, so that program reads the same data multiple times to get statistical confidence that it is correct.
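A sketch of that rereading idea (hypothetical read_sector function; real rippers like cdparanoia compare overlapping reads and do far more than this):

    # Hypothetical confidence-by-rereading: read the same sector several
    # times and keep the majority result, failing if there's no consensus.
    from collections import Counter

    def confident_read(read_sector, lba, attempts=5):
        reads = Counter(read_sector(lba) for _ in range(attempts))
        data, votes = reads.most_common(1)[0]
        if votes <= attempts // 2:
            raise IOError(f"no consensus for sector {lba}")
        return data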
I suppose if you use really crappy media and an even worse CD recorder, you will have significant signal degradation because of all the missing bits, so the error correction is constantly kicking in.
What is interesting is that it's possible to throw a discus (69m) further than a CD (68m). Perhaps the hole in the CD makes a difference, or perhaps the discs are just sized differently overall?
Audio CDs are still able to play back with significant bit errors (CD players just interpolate over the unreadable parts). It’s different for CD-ROMs and data CD-Rs.
My experience is that bit-rot with CDs is exceedingly rare.
Scratches and scuffs are rare "with good care". Even then, deep scratches are the real issue; minor scratches are often irrelevant thanks to the error correction.
So why does one need to wonder if the CD will even play?
That sounds unlikely to work. I had a professor at university who started his lecture on ECC by saying he always drilled a hole in all his CDs, since he knew they would still play anyway.
Or perhaps the ECC simply works well enough that there's no perceptible data loss for audio?