The promise and peril of "sonification": giving feedback through sound

Which is pretty anti-social if one is out in public. Round here though it seems to have peaked a while ago - mostly people are now plugged in, hopefully because they realised nobody else wanted to listen to their music, and they didn’t want to listen to anyone else’s, in turn.

Mind you, the funniest thing I saw recently was … well, you know how some people hold their phone horizontally flat in front of their mouth with the speakerphone on? (Amazing how many people do not realise how effective microphones are - I mean, if you want to talk a few millimetres from the mic, just use it like a phone and not a speakerphone.) Well, this guy I saw was holding his phone horizontally flat at the side of his head, sticking out sideways from his ear, on speakerphone. I guess he’d figured out the mic worked even at that distance from his mouth, but I still couldn’t fathom why, if he wanted the sound source right in his ear, he couldn’t just - again - use it like a phone! But no, we all had to listen to both ends of his conversation, too.

And my sample is different because I am not around your age, and nor were the people in my office (a much wider age range). But those I observe out and about with earbuds in are typically much younger - it is less common to see older people with earbuds in, in public, round here.

Noise pollution is a real thing and as @oldtaku said - they’ll never do it elegantly. So I’d prefer to err on the side of “No. Just no.”

2 Likes

People whatsapping on the bus: “Plonggg”. “taptappitytap”. “Plonggg”. “taptaptap”. “Plongggg”.

For god’s sake turn off the sounds! It’s driving me crazy.

Maybe they can invent direct-to-brain interfaces for this stuff. That way all the near retards (you never go full retard!) can rot their brain without bothering me.

Yes, it helps getting it off my chest. Thanks for asking. I’ll take my medication now, no need to worry. :slight_smile:

I know lots of older people who can’t use earbuds anymore due to hearing problems. Too-loud concerts and earbuds have wreaked havoc on the ears of a generation, including mine. If I use earbuds for an hour I have a buzz in my ears for two days, even if I turn them down to near inaudible.

1 Like

Once upon a time a mechanised battalion deployed to one of the interminable wars that have sputtered on since 9/11. Their vehicles had just been outfitted with a you-beaut situational-awareness network. Screens in every vehicle displayed a map showing where they were, where their friends were, enemy sighting reports, future plans, aircraft movements, the whole nine yards. And a chat/email function to allow asynchronous communication and file sharing. Awesome.

Being a mechanised unit, the crew stay with their vehicle at all times, and it’s critical that the crew be looking out and around them, rather than down into the turret trying to figure out WTF is going on. The crew also wear helmets with integrated mike and headphones for voice communications so they can talk to other members of the crew, and the crews of other vehicles. The system designers came up with a series of unique aural cues and alerts for different events. And, being smart, the designers figured out that they could use stereo - directing sound to one ear, or the other, or both, depending on what they wanted the message to convey.
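
(As an aside, the stereo trick itself is trivial to prototype on a desktop nowadays. Here’s a minimal Python sketch, using just numpy and the standard-library wave module, that renders an alert tone panned to the left ear, the right ear, or both. The frequencies and the event-to-ear assignments are invented for illustration - I’ve no idea what the real system used.)

```python
# A desktop-scale sketch of the stereo-cue idea: render a short alert
# tone panned hard left, hard right, or centred, and write it to a WAV
# file. The frequencies and event-to-ear assignments below are invented
# for illustration - they are not from any real military system.
import wave
import numpy as np

SAMPLE_RATE = 44100

def render_cue(path, freq_hz, duration_s, pan):
    """pan: -1.0 = left ear only, 0.0 = both ears, +1.0 = right ear only."""
    t = np.linspace(0, duration_s, int(SAMPLE_RATE * duration_s), endpoint=False)
    tone = 0.5 * np.sin(2 * np.pi * freq_hz * t)
    # Constant-power panning: spread the signal between the ears so the
    # perceived loudness stays roughly the same at any pan position.
    angle = (pan + 1) * np.pi / 4
    left, right = tone * np.cos(angle), tone * np.sin(angle)
    frames = (np.column_stack([left, right]) * 32767).astype(np.int16)
    with wave.open(path, "wb") as wav:
        wav.setnchannels(2)           # stereo
        wav.setsampwidth(2)           # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(frames.tobytes())

# Hypothetical cue assignments: which ear carries which kind of event.
render_cue("chat_ping.wav", 880, 0.15, pan=-1.0)   # routine traffic: left ear
render_cue("contact.wav", 1320, 0.30, pan=+1.0)    # sighting report: right ear
render_cue("warning.wav", 660, 0.50, pan=0.0)      # critical alert: both ears
```

(Constant-power panning matters here: if loudness changed with pan position, position and urgency would get conflated.)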

Setting up the network and outfitting all the vehicles was a significant technical challenge, and a significant financial undertaking. But, it got done, the training was conducted, and the battalion declared ready for war.

A week later a corner of their FOB was decorated with a large pile of equipment worth many millions of dollars. The constant stream of pings, pongs, bleeps, and other distractions was literally driving the crews mad, and sapped their situational awareness so badly that they had no fricking idea what was going on. So it all got ripped out and dumped, never to be used again. The battalion reverted to voice-based communications and all was well.

I have trouble believing that an app developer - who is strongly financially incentivised to maximise ‘engagement’ and distraction - is going to come up with a more coherent and unobtrusive system than the developers of that military system. Regardless of what you think of the military-industrial complex, the engineers do identify with the soldiers they interact with, and genuinely want to help them to be more effective. App developers, on the other hand … not so much.

6 Likes

Is LCARS the best example of MovieOS?

Yes, I’ve used IRIX, but, as Wikipedia explains:

The O2 used the CRM chipset specifically developed by SGI for the O2. It was developed to be a low-cost implementation of the OpenGL 1.1 architecture with ARB image extensions in both software and hardware. The chipset consists of the microprocessor, and the ICE, MRE and Display ASICs. All display list and vertex processing, as well as the control of the MRE ASIC is performed by the microprocessor. The ICE ASIC performs the packaging and unpacking of pixels as well as operations on pixel data. The MRE ASIC performs rasterization and texture mapping. Due to the unified memory architecture, the texture and framebuffer memory comes from main memory, resulting in a system that has a variable amount of each memory. The Display Engine generates analog video signals from framebuffer data fetched from the memory for display.

1 Like

Thanks for that story. (It constantly surprises me - and it ought not to by now - what amazing things some of the community knows!) :wink:

I was going to say that what was missing from your narrative (and may or may not have been missing in reality) was the weeks and months of training, rehearsal, practice and exercise after exercise to make the aural inputs second nature, like muscle memory - like Pavlov’s dog, each distinct sound should subliminally and immediately signal its meaning to the recipient, without distracting from or disrupting all the other stuff they always need to do or be aware of. Given all this, it is plausible the system MAY have been effective and successful - or at least adapted based on live-use feedback.

But I fear the phrase “the training was conducted” may not represent such in-depth acclimatisation and user testing, and may have been simply ‘education’ with some training, and tests to check the information had been acquired. But acquiring the information is a very long way from seamlessly and instinctively deploying it in practice, as this story suggests.

Military and app developers, despite their best intentions, are not practitioners, and in my general experience they fail to provide for, or are not sufficiently funded (time AND money) to conduct, anything like enough user-experience testing - not only after development, to check the product works as expected in the real world, but - critically - at the specification and development phases, to check whether the right approaches/methods are actually being considered before the product is built.

So not only won’t these sonic-input app merchants do it elegantly, they won’t even know there’s an elegance to be achieved, and will be mystified as to why what (to them) clearly functions doesn’t actually work and is not adopted.

Your example is a classic case of a system that clearly functions, but does not work.

3 Likes

I’ve long believed there is great unexplored potential in audio user interfacing and the use of ‘soundscape’ environments. Many years ago I became concerned about the lack of any native computing environments for the blind. It struck me as especially stupid that PCs of the time still required a monitor plugged in just to switch on. Computing seemed to me a basic form of literacy and personal independence, and a computer that really wasn’t adapted for the blind was leaving many people behind in the so-called information revolution.

It also occurred to me that an audio computing environment had great potential for true mobile computing. Screens and graphics constitute the largest portion of mobile computing device cost, power consumption, weight, and processing overhead. An audio computer with the general power of a laptop could easily fit in the form factor of an MP3 player. Combining this with the use of chord keypads - and perhaps some day sub-vocal speech recognition, as NASA was toying with at the time - meant real on-the-go writing and messaging capability. This could be a boon to field journalists, scientists, or perhaps soldiers.

I also saw much potential in the creation of audio adventure games making the most of 3D audio and the sound stage techniques of radio theater.

Of course, there would be some compromise in this, given how visually dependent our culture is. But having delved into the work of R. Murray Schafer and his soundscapes concept, I realized there was much more potential in sound as an information environment than we usually realize. And if, at least, one could devise a practical means of word processing in audio, well, that covers the vast majority of things we use computers for even today. It’s mostly all variations of that. At the least, there should be a useful market niche.

When NASA Tech Briefs introduced the Design the Future competition in 2002, I submitted a proposal for an audio operating system called Mozart and a mobile pocket PC based on it. Mozart was based on the idea of ambient 3D audio soundscapes as the defining ‘space’ of applications and the use of ‘audicons’ in those spaces for functional cues. I imagined word processing that used text-to-speech to read passages a sentence at a time, with chord-key interrupts working like the standard buttons of a tape recorder, accented with audicon feedback. Text input with concurrent sound feedback would then be done by chord entry. Audio coding could work similarly, with, perhaps, a preference for threaded interpreted languages like Forth.
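
(The core reading loop is simple enough to sketch with today’s tools. Here’s a rough Python toy of the sentence-at-a-time idea, using the pyttsx3 text-to-speech library - typed commands stand in for the chord keys, and a spoken word stands in for an audicon. It’s an illustration of the concept, not the original Mozart design.)

```python
# A toy of the sentence-at-a-time reading loop: speak one sentence,
# then wait for a tape-recorder-style command. Typed input stands in
# for chord keys; a spoken word stands in for an audicon.
import re
import pyttsx3

engine = pyttsx3.init()

def read_document(text):
    # Naive sentence splitter; a real system would need something smarter.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    pos = 0
    while 0 <= pos < len(sentences):
        engine.say(sentences[pos])
        engine.runAndWait()
        # Tape-recorder-style controls: next / back / quit.
        cmd = input("[n]ext / [b]ack / [q]uit > ").strip().lower()
        if cmd == "b":
            pos = max(0, pos - 1)
        elif cmd == "q":
            engine.say("stopped")  # crude stand-in for an audicon
            engine.runAndWait()
            break
        else:
            pos += 1

read_document("This is the first sentence. Here is the second. And the last!")
```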

I never got much feedback on the idea, but my competition proposal did win a prize: a free pop-rivet gun.

3 Likes

The computer I use is a perfectly normal laptop running Windows 10. It’s in the software where the “magic happens”. I use a program called a screen reader to access the computer. A screen reader intercepts what’s happening on the screen and presents that information via braille (through a separate braille display) or synthetic speech. And it’s not the kind of synthetic speech you hear in today’s smart assistants. I use a robotic-sounding voice which speaks at around 450 words per minute. For comparison, English is commonly spoken at around 120-150 words per minute. There’s one additional quirk in my setup: Since I need to read both Finnish and English regularly I’m reading English with a Finnish speech synthesizer. Back in the old days screen readers weren’t smart enough to switch between languages automatically, so this was what I got used to.

He includes several samples of his text-to-speech interface. It is utterly alien to me as a sighted person: robotic and more than three times faster than standard spoken English. But then, I’d much rather stare at a silent screen, an option that he obviously doesn’t have.
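
(For a feel of that speed gap: 450 / 135 ≈ 3.3, hence “more than three times faster”. If you want to hear it yourself, the pyttsx3 library exposes speech rate as a nominal words-per-minute property - how literal the number is depends on the voice installed on your system, but the contrast comes through.)

```python
# Hear the gap between conversational speech and a 450 wpm screen-reader
# rate. pyttsx3's 'rate' property is a nominal words-per-minute value;
# the actual speed depends on the installed voice.
import pyttsx3

engine = pyttsx3.init()
sample = "A screen reader presents what is on the screen as braille or synthetic speech."

for wpm in (135, 450):  # conversational speech vs. the rate quoted above
    engine.setProperty("rate", wpm)
    engine.say(f"{wpm} words per minute. {sample}")
engine.runAndWait()
```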

5 Likes

What you’re describing is the difference between Acceptance Testing & Evaluation (“Have we built the thing right?”) and Operational Testing & Evaluation (“Have we built the right thing?”).

As I recall, everyone was under the pump to get the system ready and fielded in time for this unit’s deployment, so user feedback loops and training were probably truncated. Again, as I recall it, the concept was basically sound - I believe something very much like it is used by fighter pilots - until the good ideas fairy came to visit. Everyone wanted ‘their’ thing to have an audible and unique alert, and with weak project oversight there was no one acting as gatekeeper. That was compounded by an operational issue - the crews have to wear their helmets for safety and because they utterly rely on uninterrupted voice comms; mechanised transport is noisy. So the lads were caught between the rock of needing their helmets and the hard place where their helmets were causing them to fail.

The failure of the system on operations was so bad and so public that the concept is now so thoroughly discredited that no one else will touch anything like it with a bargepole.

4 Likes

And I find myself wondering, yet again, if any of these projects (military, private, whatever) thought to put a blind person on the team.

I, for one, appreciate it when the music suddenly changes to something ominous. How else would I know an enemy is nearby?

5 Likes

If you don’t already know it, you might like:

The protagonist has a (sadly, initially malfunctioning) synthesizer implanted which generates an appropriate soundtrack for whatever is going on.

3 Likes

“Every man is a hero of his own story.”

Got to have theme music then.

2 Likes

The failure of the system on operations was so bad and so public that the concept is now so thoroughly discredited that no one else will touch anything like it with a bargepole.

Which is a shame, because I’m not as pessimistic about the potential for a system like the one you describe as anothernewbbaccount is. I’m sure properly designed systems will be a rarity compared with all the poorly designed ones that app developers will pump out, but I can’t believe it’s impossible. Just the customization I’ve done to my phone’s notifications (audible and vibration), to avoid pulling it out of my pocket every time it dings, makes me believe it can work. Then again, I’m a trained acoustician, so I’m partial to audio solutions.

1 Like

This topic was automatically closed after 5 days. New replies are no longer allowed.