Suspect my headphones are just bad, but bad in ways I intimately understand
I picked exclusively the 320kbps mp3 files. Again, more by hunch than by conscious choice.
I wonder if I’ve become habituated to that sound by my listening habits and don’t appreciate the wav format for that reason?
I got half right, and yeah, no real difference that I could tell. I imagine with way better headphones I could do better, but probably not by much - to audiophiles' bane, double-blind studies have shown they can't tell the difference, or even favor CDs over vinyl.
A few years ago I ran a similar test for myself, testing CDs, 192k MP3s, 320k MP3s, and a few lossless formats against each other. The test was on a mid-range home audio system (Denon electronics, Paradigm speakers). Although I thought I could discern some differences, it took much effort to do so.
I picked the WAV 5 out of 6 times, listening through earbuds my daughter got at the $5 store. The only one I got wrong was the Jay-Z sample - they all sounded like garbage to me.
They all sound identical to me. So I didn’t even bother guessing which was which.
audiophile, n: Someone who listens to the equipment instead of the music.
I’m not shocked that I can’t tell the difference between 320k & wav. Even 128k is pretty close, although it’s recognizable in a side-by-side comparison.
I find this kind of thing interesting primarily because it gives me some insight into what I can get away with when I am compressing files. I’ve usually gone with 192k when it’s music for personal use - that’s my personal cutoff, above which I can’t distinguish a difference.
I also used to listen to Solid Steel when it was being streamed at 64K - the quality drop was noticeable, but still not bad enough to seriously impede enjoyment.
for further audiophile / nerding out consideration:
MP3 quality is not solely determined by bitrate. There’s usually some tradeoff between the speed of the compressor and the quality of the compressed file for a given bitrate. A 128K mp3 made with a crappy compressor can sound pretty terrible.
I assume that “wav” in this case means CD quality audio, but that’s not necessarily the case. wav supports a broad variety of sampling rates & bit depths.
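That "wav isn't necessarily CD quality" point is easy to verify with Python's standard library, for anyone curious - a quick sketch (the filename `demo.wav` is just a placeholder; this writes a deliberately low-spec file and reads its header back):

```python
import wave

# Write a short WAV at a decidedly non-CD spec to show the container
# isn't tied to 44.1 kHz / 16-bit / stereo.
with wave.open("demo.wav", "wb") as w:
    w.setnchannels(1)           # mono
    w.setsampwidth(1)           # 8-bit samples
    w.setframerate(8000)        # 8 kHz, roughly telephone quality
    w.writeframes(bytes(8000))  # one second of (8-bit) silence

# Read the header back: channels, bit depth, sample rate.
with wave.open("demo.wav", "rb") as w:
    params = (w.getnchannels(), w.getsampwidth() * 8, w.getframerate())

print(params)  # (1, 8, 8000) - a perfectly legal WAV, nowhere near CD quality
```

A "CD quality" WAV would read back as (2, 16, 44100); both are equally valid files as far as the format is concerned.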
I guess that sounds clever, but in practice it’s exactly the opposite. The idea is to get rid of everything that colors the sound in any way. When using poor-quality playback equipment, the artifacts one hears are introduced by the equipment itself.
I have already done too many of these tests, but I’ll just throw out there that MP3 compression is based upon losing “unnecessary” data by means of perceptual coding algorithms, which guess what you won’t notice based upon suppositions about your hearing and the kind of music you listen to. If it has “normal” harmonics, most examples can sound ok. But for atonal music, MP3 tends to be kind of useless. If you’re encoding Merzbow or Gunung Jati, you might notice the differences far more easily.
I picked 2 of each, so no.
But I wouldn’t choose to listen to any of those pieces of music. At a push, I’d say I could maybe tell a difference on the Mozart (which I got right), and that’s it.
Also, my hearing is shite, my headphones were pretty cheap and I was listening via my laptop.
I remember when I was at uni and we were looking at how minidisc files were put together (IIRC, they throw away about 80% of CD data by getting rid of frequencies you can’t hear, or would be masked by other ones?), and being told that people could apparently tell the difference between them and CDs, but there wasn’t a clear preference one way or the other.
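If I'm remembering the numbers right - MiniDisc's original ATRAC mode ran at about 292 kbps (treat that figure as approximate, it's from memory) against CD's 1411.2 kbps - then the "throw away about 80%" figure checks out:

```python
# CD audio: 2 channels x 16 bits x 44100 samples/sec
cd_kbps = 2 * 16 * 44100 / 1000   # 1411.2 kbps
atrac_kbps = 292                  # MiniDisc SP mode - approximate, from memory

discarded = 1 - atrac_kbps / cd_kbps
print(f"{discarded:.0%}")  # 79%
```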
I was 3/6 on 20 year old bookshelf loudspeakers that I repaired myself with rubber cement and an old dress shirt.
All 3 of my misses were picking 128, and most of my collection is encoded using the R3Mix Preset of 128-224 VBR.
I can’t tell the difference between a 1411.2 kbps FLAC and a cheesy MP3 ripped from the same CD of Tuvan throat-singing. Seriously, I’ve tried. I’m almost entirely deaf in one ear, and about half deaf in the other, so this comes as no surprise. Other people hate it when I equalize a stereo for my ears!
Nonetheless I ripped all my CDs lossless to FLAC and I ripped all my cassettes to the same 1411.2 kbps. It seems foolish to me to degrade the music to the poor capabilities of my ears. Not only can my “golden ear” friends enjoy the full quality of my recordings when they visit, but also I can copy my music to other media (for instance, back to audio CDs) without losing quality, and convert to other formats without xerox-of-a-xerox effects.
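The xerox-of-a-xerox point can be illustrated with a toy example - here zlib stands in for a lossless codec like FLAC, and a crude bit-dropping quantizer stands in for a lossy one (pure illustration, not real audio code):

```python
import zlib

pcm = bytes(range(256)) * 4  # stand-in for raw PCM sample data

# Lossless round-trip: decompress(compress(x)) is bit-identical,
# no matter how many generations of copying you do.
assert zlib.decompress(zlib.compress(pcm)) == pcm

# Toy "lossy" step: throw away the low 4 bits of every sample.
def lossy(data: bytes) -> bytes:
    return bytes(b & 0xF0 for b in data)

print(lossy(pcm) == pcm)  # False - that information is gone for good
```

This toy quantizer happens to be idempotent; real lossy codecs re-lose additional data on every re-encode, which is exactly the xerox-of-a-xerox effect.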
It feels like they tried to make this harder on purpose. The most obvious giveaways on bitrate are usually in the drums - especially cymbals. These songs are almost entirely devoid of actual drums.
I don’t get Rob and others’ weird ongoing vendetta against audiophiles. Sure, the industry has some snake oil, just like most industries that sell components. There are “performance” add-ons for cars that don’t add any actual performance too, but people buy them. So what?
When I have listened to $1K, $5K, and $30K audio setups, the difference was completely obvious. The two $30K+ audio systems I heard revealed aspects of the music that I had never heard before, and the sound of the drums was visceral. That whole “imaging” thing is very real - I felt like I could see the stage. Sure, that might not have to do with the speaker cables or the power cords, but there was a huge difference. I would suggest that if people can’t tell the difference between a poor-quality and a high-quality audio setup, their hearing might not be that good.
To me, 128K mp3s sound like crap on pretty much any set of headphones or any speaker larger than a silver dollar. I keep turning up the music, but it just gets worse, not better. On most equipment, 320K mp3s are fine with me. But it wouldn’t surprise me if the .wavs were noticeably better on a good system. And I just don’t get the endless tirade against people who are willing to put in the effort and money to listen to great sound.
Exactly this. And treating a choice of the 320k MP3 as a failure? It’s obvious the person didn’t choose the 128k, so that should have counted as a success. Definitely a slanted quiz, IMHO. Bad statistics bother me greatly.
I usually find that the importance of the compression depends on the range of the music. E.g. the beginning of the Neil Young song was indistinguishable for me, really. But at the end of the snippet, when a mix of orchestral instruments comes in, I could hear a difference (not huge, but there). The Mozart clip was also easier to distinguish. For a lot of the heavily modulated pop music, though, it’s hard to imagine it mattering at all.
Absolutely right. With those samples, there was only one where I could confidently distinguish 320k from wav (128k was always pretty obvious). It was the Mozart, which, being an old recording, had a little bit of white noise, in which you can hear the MP3 encoding. For most music, especially pop, 192k is enough. For, as you mentioned, Merzbow, and also for bagpipe, hurdy-gurdy, or to a lesser degree, accordion music (lots of overtones), and most of all for old recordings with significant white noise, the encoding is pretty obvious. It’s kind of funny that lo-fi, old recordings are the ones that become almost unlistenable with MP3 encoding.