There can be a lot of complacency on these subjects. We are used to it. Reminds me of a book a few years back about ‘going naked on the internet’. I read a recent article where high school kids were interviewed after their sex tapes and pics were picked up by third parties. Their reaction was “and…?”
The older generation teachers and interviewers didn’t get it.
Probably, it even adds some thrill. People like being the center of attention and tend to highly overestimate how much attention is paid to them (*the Purple Gorilla book). Even the idea of an audience can add some additional sense of self-importance, and some would argue that gives their lives a bit more of a sense of purpose.
“I need a sandwich, a jammy dodger, it’s just bread and jam!”
Interestingly this was a plot point in last week’s Person of Interest.
The firm Fetch and Retrieve (a Google/Facebook stand-in) was having people listen to the voice requests to see how accurate they were.
Gotta watch PoI, people! Best SF show on TV these days.
But how do you improve the technology unless you do something like this? You can’t both be upset at this and upset that Siri has trouble with, say Japanese accents. There’s just too much variety among voices for canned tests to be sufficient. We just have to recognize that if you use this technology you are part of an experiment.
Well, you could provide some way for the users to knowingly select which bits to do this on. Google Voice does something like that, where you can “donate” specific voicemails to be manually transcribed and looked at for accuracy of the algorithms.
The experimenter’s desire to get as wide a range of input possible shouldn’t unilaterally override the test subject’s privacy.
And Apple has settings for standard and enhanced dictation, where enhanced dictation doesn’t require an internet connection and isn’t supposed to send Apple anything.
But if they really want to improve transcriptions then they need to recognize more dialects and dialect variations, and they need to be able to compare the sounds, and the machine transcriptions, with the intended texts, which won’t really work without volunteers reading set texts, or talking and providing their intended texts.
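Comparing a machine transcription against the text a volunteer actually read is typically done with word error rate (word-level edit distance). A minimal sketch of that comparison — my own illustration, not any vendor’s actual code:

```python
def word_error_rate(reference, hypothesis):
    """Word-level edit distance between the intended text (reference)
    and the machine transcription (hypothesis), normalized by the
    number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edits needed to turn the first i reference words
    # into the first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution ("jammy" -> "jam") plus one insertion ("and"):
# 2 edits over 5 reference words, so WER = 0.4.
wer = word_error_rate("i need a jammy dodger", "i need a jam and dodger")
```

The point being: this measurement only works when you *know* the reference text, which is exactly why volunteers reading set texts beat snooping on random audio.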
Snooping seems … an inefficient, as well as an unethical, way to try to improve transcription.
I see this as a short-term concern. Once algorithms are perfected, why would any organization pay people to listen to dull fragments of inconsequential conversations? Except the NSA, of course.
I also saw an article about how Facebook is now connecting people to suicide intervention resources based on the contents of some of their posts and queries. This was announced after the episode aired, by the way.
“Tim Robbins is a creep”
What the hell was I doing that day?
POI is becoming more and more a documentary rather than fiction these days.
I imagine this tech will soon get matched up by police departments with GPS and other scooped data to issue tickets for using a handheld device while driving. I hope I’m not just imagining things, but if the recent past has any predictive value for the near future . . .
I’m thinking that this isn’t really that bad in itself if they’re only using the clips to improve their service. It’s no different from typed Google searches. They need to be more up front about the fact that the services aren’t really private though. If they consciously don’t do that because they want people to use the services more they need to change that attitude.
POI is propaganda.
Why do people leave dramatic comments like these without explanation? OKAY IF YOU SAY SO RANDOM INTERNET PERSON!
Thumbs up to Google for the disclosure (for once). I definitely recommend the selective “donation” instead of the full-out “send everything to Google” option. They may not attach your name or other metadata about the voicemail, but anything that has names or phone numbers in the content could be sent along.
I came here to make the same comment. I thought that the job in that episode was just made up for the show, but here it is! Next thing you know, it’s going to turn out that the NYPD is prone to corruption…
The question is whether Facebook is also ‘connecting’ potential employers, insurers, schools (if of college-applying age), and various other interested parties with this information…
Algorithmically recommending that people reconsider self-termination is the sort of peppy interventionism that I find rather grating; but it is relatively innocuous and, who knows, might result in some people successfully getting to the point where suicide no longer seems better than the alternative, which would be nice.
The more…lucrative…applications of preemptively identifying likely psych-nuisances (and without recourse to HIPAA or otherwise protected health information!) are all way, way less benevolent; but hell yeah they’d be marketable.
Your Samsung TV is only listening for the specific cue words that are used to initiate commands. Everything else goes into the bit bucket. Only when those cue words are detected does it actually start “listening” for a command to carry out. Same with Google Now.
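The cue-word gating described above can be sketched roughly like this — a toy illustration only; the wake phrases and function names are hypothetical, not Samsung’s or Google’s actual implementation:

```python
# Sketch of cue-word gating: audio chunks are examined only for the
# wake phrase; everything else is discarded (the "bit bucket").
# All names and cue words here are made up for illustration.

CUE_WORDS = {"hi tv", "ok assistant"}  # hypothetical wake phrases

def process_audio(transcribed_chunks):
    """Yield only the chunks that immediately follow a detected cue word."""
    armed = False
    for chunk in transcribed_chunks:
        text = chunk.strip().lower()
        if armed:
            yield text          # treated as a command and acted on
            armed = False
        elif text in CUE_WORDS:
            armed = True        # cue word heard: start "listening"
        # otherwise: the chunk is dropped, never stored or transmitted

# Only "volume up" survives; the surrounding chatter is discarded.
commands = list(process_audio(["what a nice day", "hi tv", "volume up", "gossip"]))
```

Of course, this only describes the advertised design — the whole dispute in the article is about what happens to audio once the device *does* decide to send it somewhere.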
So the Mechanical Turk works more-or-less the same way, but it’s whiz-bang internet, so it’s okay. This service is not whiz-bang, so it’s bad. The article says, essentially, “This is legal, and it’s not very innovative, but BE ALARMED!!!”