Digital hearing aids are producing sound as lousy as MP3s

I can see that some people really don’t understand. Unless you have lost your hearing, you probably won’t understand. So perhaps I can try and help explain here.

I am profoundly deaf and have been deaf since birth. Had digital hearing aids been invented many years earlier and had I started with them from the off, I’d most likely not know any difference in how the world sounds, so I would have accepted digital hearing aids as offered by many audiologists. But that’s not the case. I started with analogue hearing aids, with large boxes worn on the body, because the tiny In-The-Ear (ITE), In-The-Canal (ITC) or Behind-The-Ear (BTE) hearing aids weren’t available yet. Basically, those large boxes were just power amplifiers, similar to what was being used in the everyday cassette recorders, radios, record players etc. of the day.
So as I’m sure you will understand, the sounds received were simply amplified and sent to the ear: no fancy processing, nothing altered. And this was my world for about 40 years. The sounds of the world went from my analogue aids into my ears and were processed by my brain, and they were pretty much the sounds every normal hearing person hears.

Then along came digital hearing aids. The “beauty” of digital aids was that audiologists could alter the programming to change what happens to some sounds to suit a deaf person’s hearing loss. So, for example, if that person has a great loss in the higher frequencies, the digital processors can shift those higher frequencies down to lower frequencies that this person can hear. Therefore birds start tweeting in mid-tones. Your wife suddenly sounds like your boss.

Further, digital processing can also compensate for feedback, the annoying whistling we get when our hearing aid moulds are not fitting so well. Digital processing has now got so good that feedback is pretty much totally eliminated. It’s great. But there’s a drawback: when the digital hearing aid receives sounds from the outside world that resemble feedback (similar frequencies in music, for example, birds tweeting, the human voice, people whistling, your child playing the recorder), it thinks they are feedback and attempts to stop them, resulting in a warbling sound in our ears, rather like when you speak into a fan.

And there’s other digital processing that can be, and is, performed which the makers of digital hearing aids and audiologists alike think we like to hear, adjustments meant to benefit us, such as making digital aids cut out background noise when in a restaurant (say), allowing the digital aid and you to concentrate on hearing a voice spoken to you. Again, this can be a good thing for the deaf. But imagine, if you are hearing, that you were sat in a similar noisy restaurant and suddenly all the sounds around you stopped and all you could hear was your partner talking. You would find that pretty strange and unnerving, wouldn’t you? And yes, it’s unnatural. It’s not right. WHY should we, the deaf who have been lucky enough to have experienced the sounds of the world, suddenly be told what we can and cannot hear, what we should and should not hear, and have our previously understood and accepted frequencies shifted?

THAT is why many of us want our analogue hearing aids back. So we can continue to hear the world as we’re used to. I’m quite happy to have digital hearing aids as long as they are programmed so that they just amplify sounds, without any of the fancy-schmancy processing mucking up our beloved sounds. This is why I recently bought myself a hearing aid programming box off fleabay called a Hi-Pro; connected to my digital hearing aids via cables, and using some software on my PC, I have been able to turn OFF all the fancy digital processing my audiologist set up, effectively making my digital aids analogue in operation. Almost.

So it CAN be done. That, or give us our analogue hearing aids back again.
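
For anyone curious what the frequency shifting described above actually does to a sound, here is a crude sketch of one way to move energy from high frequencies down to lower ones. Real hearing aids use far more sophisticated frequency-lowering schemes; the cutoff and shift values here are invented purely for illustration.

```python
import numpy as np

def crude_frequency_lowering(x, fs, cutoff_hz=4000, dest_hz=2000):
    """Illustrative only: relocate spectral energy above `cutoff_hz` so it
    starts at `dest_hz` instead. Real hearing aids use much smarter
    frequency-lowering algorithms; these parameters are made up."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)

    hi = freqs >= cutoff_hz                        # bins to relocate
    shift_bins = int((cutoff_hz - dest_hz) * len(x) / fs)

    Y = X.copy()
    Y[hi] = 0                                      # silence the original high band
    src = np.nonzero(hi)[0]
    dst = src - shift_bins                         # where that energy lands now
    ok = dst >= 0
    Y[dst[ok]] += X[src[ok]]                       # mix the lowered band back in

    return np.fft.irfft(Y, n=len(x))

# A 6 kHz "bird tweet" comes back centred around 4 kHz.
fs = 16000
t = np.arange(fs) / fs
lowered = crude_frequency_lowering(np.sin(2 * np.pi * 6000 * t), fs)
```

Run on a 6 kHz tone, the “bird tweet” comes back centred around 4 kHz, which is exactly the “birds start tweeting in mid-tones” effect.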

22 Likes

Lossless algorithms do not discard data; they are LOSSLESS. These hearing aids and lossy encoders such as MP3 are, by definition, not using lossless algorithms.

7 Likes

Home-brew analogue is not all it is cracked up to be.

[image: hearing aid]

6 Likes

Shit, my bad. I blame the drugs. (Ya want some?)

Yes, I meant the LOSSY algorithms that are at the heart of everything JPEG/MPEG derived, including the oh so aptly named Xing encoder zhudder that would hopefully be considered a war crime to use in a hearing aid.

3 Likes

No wireless, but enough space for a nomad.

1 Like

Out of genuine interest, as someone with tinnitus and an almost complete inability to hear someone across a table in a crowded, hard-surfaced, noisy bar, and likely heading for hearing aids one day, why could your audiologist not set it up as you desired? Could you not go back to them and ask for an adjustment? Seems like having to hack one’s own h/a is going to be beyond most people.

Are there digital devices with more than one mode? E.g. an ‘analogue’ mode (as you made for yourself), and a ‘customised by your audiologist for you’ mode, and maybe a ‘loud bar’ mode?

1 Like

Yaay! I’m an analogue nomad! :wink:

Good question.
I actually asked my audiologist once if they could turn off this and adjust that and increase the volume a tad more. Her reply was something like “No sorry, the aids have been programmed to your audiogram and adjusted specific to your hearing loss. There’s nothing more I can do to get the volume any higher, I have to go by the book.”

For a few years I accepted this, until I discovered that I COULD alter the way it works myself, get more power, and make many other adjustments besides.

10 Likes

I actually searched and I don’t know what codec is being used in digital hearing aids. 96 kbps Xing could be an improvement, as horrifying as that sounds.

I am tempted to adapt Schneier’s law for lossy audio codecs:

Anyone can create an algorithm that sounds good for them.

And I’m sure it’s good for the manufacturers, but it isn’t for deaf people.

2 Likes

I’m not sure why they need to encode in a lossy codec in the first place.

Oh no, these things talk using Bluetooth don’t they? Bluetooth’s audio compression dates back to the 90s and has basically been fossilized into the spec ever since. The most recent spec finally (finally!) defines some better encoding, but adoption is sporadic at this point.

The worst part is that they could transmit raw 48 kHz, 16-bit stereo samples for about 1.5 Mbit/s, around half of Bluetooth EDR’s top 3 Mbit/s data rate. There’s really no need to compress at all except to save power on the transmitter and to make implementation easier (Bluetooth chips come with built-in audio streaming profiles based on the spec; raw audio would require extra support from a processor on both ends).
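
As a back-of-envelope check on those numbers (nominal figures only, before protocol overhead):

```python
# Back-of-envelope bitrate arithmetic for uncompressed PCM audio.
sample_rate_hz = 48_000
bits_per_sample = 16
channels = 2

raw_bps = sample_rate_hz * bits_per_sample * channels
print(f"Raw PCM: {raw_bps / 1e6:.3f} Mbit/s")   # 1.536 Mbit/s

# For comparison (gross rates, before protocol overhead):
#   Bluetooth Classic basic rate          : ~1 Mbit/s
#   Bluetooth EDR                         : 2-3 Mbit/s
#   Typical high-quality SBC stream (A2DP): ~328 kbit/s
```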

2 Likes

Good to know! So a hearing aid sampling at 22 kHz actually makes sense in this context… Intelligibility of speech mostly depends on the 1 to 4 kHz range, with some signal in the higher ranges. Good old Nyquist tells us that 8 kHz sampling should cover this, and 22 kHz should mainly make things sound better (not necessarily more intelligible). I am going to assume that much smarter people than I thought about this when designing phone systems…

So yeah these wouldn’t be great for music, but 22 kHz should fit the medical goal of understanding conversation, assuming reasonable quality of manufacture and design.
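
To put rough numbers on the Nyquist point (just the textbook fs/2 arithmetic, nothing device-specific):

```python
# Nyquist: a sample rate of fs can represent frequencies up to fs / 2.
def nyquist_hz(sample_rate_hz: float) -> float:
    return sample_rate_hz / 2

for fs in (8_000, 22_000, 44_100):
    print(f"{fs:>6} Hz sampling -> content up to {nyquist_hz(fs):>7.0f} Hz")

# 8 kHz sampling (classic narrowband telephony) covers up to 4 kHz, where
# most speech intelligibility lives; 22 kHz covers up to 11 kHz, which adds
# fullness rather than intelligibility; 44.1 kHz (CD) covers the full
# ~20 kHz range of human hearing.
```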

1 Like

Would definitely keep my ears warm.

Cell phones use(d) even worse compression. It was so bad that reading off a long string of hex digits over the phone was almost certain to generate errors unless you used your NATO alphabet. Having some guy from the field call in because his copy of Windows has forgotten its license and you need to punch the code into Microsoft’s site and then read back the results opened my eyes to just how terrible the quality of the audio is on cell phones.

3 Likes

FYI, aggressive noise removal sounds exactly like aggressive MP3 compression.

There’s no reason for a hearing aid to use any kind of (bit rate) compression, but if it tries to remove all the background noise, the result will be what the original post is complaining about.
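
To see why aggressive noise removal warbles, here is a minimal spectral-gating sketch (the textbook approach, not what any particular hearing aid actually does): each time-frequency bin that isn’t well above an estimated noise floor gets zeroed, and bins flickering around the threshold from frame to frame are what produce the “musical noise” or warbling artefact.

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_gate(x, fs, noise_clip, threshold_db=6.0):
    """Toy noise reduction: estimate a per-bin noise floor from `noise_clip`
    (a noise-only stretch of audio) and zero every time-frequency bin that
    is not well above it. Bins that hover near the threshold switch on and
    off from frame to frame, which is the warbling / 'musical noise' sound."""
    _, _, X = stft(x, fs=fs, nperseg=512)
    _, _, N = stft(noise_clip, fs=fs, nperseg=512)

    noise_floor = np.mean(np.abs(N), axis=1, keepdims=True)   # per-bin average
    gate = np.abs(X) > noise_floor * 10 ** (threshold_db / 20)

    _, y = istft(X * gate, fs=fs, nperseg=512)
    return y

# Example: a 1 kHz tone buried in broadband noise.
fs = 16000
rng = np.random.default_rng(0)
noise_only = 0.1 * rng.standard_normal(fs)
noisy = np.sin(2 * np.pi * 1000 * np.arange(fs) / fs) + 0.1 * rng.standard_normal(fs)
cleaned = spectral_gate(noisy, fs, noise_clip=noise_only)
```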

3 Likes

Thanks. Sounds like too much adherence to the book and too much risk aversion to do what the patient wanted. Maybe they feared comeback (being sued?) if what you asked for had proved detrimental. Sigh. H&S and litigation cultures. And you didn’t, presumably, just ask for the volume to be ‘higher’. Good to know they can be programmed like this, though. It does raise the question I posed earlier: if they can be programmed to one ‘profile’, they could in principle be programmed to offer multiple/alternative profiles.

2 Likes

That, I would say, is malarkey. A small, niche subset of the music industry is interested in analog. The “music industry” as a whole is by no means moving towards analog. It otherwise continues to trend entirely digital. And digital isn’t the problem. The finest, highest fidelity audio recording with the greatest dynamic range available is digital. The problem in the hearing aids, assuming the reporting is accurate, is low fidelity audio, which is not something inherent to digital audio.

3 Likes

I would say yes, they can. There are a number of things that can be turned off or on in the software; you have control over the MPO, the gains, the levels over given frequencies, frequency shift and then some. What can be changed, and to what extent, varies from model to model and with the software provided for making the changes, but yes, it’s there.

Or more correctly, the problem is in the nut that attaches the bolt, aka the audiologist doing the programming.
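
As a purely hypothetical illustration of the kind of knobs fitting software exposes (the field names below echo the parameters mentioned above: MPO, per-band gain, frequency shift; they are invented for this sketch and don’t correspond to any real manufacturer’s software):

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class FittingProfile:
    """Hypothetical fitting profile; field names are invented for this sketch."""
    mpo_db_spl: float = 120.0                    # Maximum Power Output ceiling
    gain_db_by_band_hz: Dict[int, float] = field(default_factory=lambda: {
        250: 20.0, 500: 25.0, 1000: 30.0, 2000: 35.0, 4000: 40.0,
    })
    frequency_shift_enabled: bool = True         # frequency lowering on/off
    feedback_canceller_enabled: bool = True
    noise_reduction_enabled: bool = True

# "Make it behave like an analogue aid": keep the per-band amplification,
# switch every adaptive feature off.
analogue_like = FittingProfile(
    frequency_shift_enabled=False,
    feedback_canceller_enabled=False,
    noise_reduction_enabled=False,
)
```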

6 Likes

It also wouldn’t at all surprise me if the market for hearing aids pushes ‘optimization’ in specific and not necessarily helpful directions:

If you are selling consumer electronics, you have an incentive to sound ‘good’, even at the expense of fidelity (audiophile market aside).

If you are selling expensive medical devices, you are probably much better off with something that scores well on whatever tests audiologists use to judge improvements in the ability to hear and distinguish relevant stimuli, regardless of whether the results are aesthetically pleasing.

I suspect that there are actually some very clever tricks to be found if the task assigned (implicitly or explicitly) is to increase the percentage of words identified correctly, or to improve the distinction between tricky phonemes, or between speech and background noise.

They just don’t necessarily involve output that the hearer will like.

If this hypothesis is true, I’d imagine that DSPs give designers a lot of room to do stuff that would have been off the table in a severely power-constrained analog system, leading to more radical munging; but that the metric being optimized is moving in the desired direction.

The ‘lazy oligopoly’ theory could still explain why customer satisfaction isn’t considered a vital metric; but unless performance is actually stagnating or declining across the board, not just suffering in lower-priority areas, we can’t safely assume that no work is being done, just that its goal may have diverged from that of the end user some time back.

2 Likes

the digital processor samples incoming sound at a rate far lower than that of an old CD player, effectively turning the entire world into a giant MP3 file

I believe the author would prefer a new hearing aid that sounds like his old one

I do not believe he understands how the new ones work, or what the problem is

1 Like

It seems like the solution there is to have multiple modes that the user can select - one with maximal processing to obtain good results on whatever metrics, and one to provide a pleasing and naturalistic experience. I think I have heard that the nicer hearing aids do actually have something like that.

1 Like