For most people, most of the time, these issues are a matter of principle rather than immediate danger. Yes, Echo potentially opens you up to some nasty things, but so does living in a house without barred windows and triple-locked doors. We all sacrifice safety for convenience every day.
The principle is real and important! It’s not good to normalize the creeping invasion of privacy, certainly not for a bit of minor convenience. But real, immediate quality of life is more important. Abstract principle vs. helping your spouse live well shouldn’t even be a question.
Sure, that’s possible; in the case of the Echo and gHome, they’ll be connected via wifi, so if you had AP isolation turned on for clients, there’d be no knocking them over from the LAN (and you can’t, or don’t want to, access them directly anyway; my impression is that everything goes phone app -> mothership -> device).
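For what it’s worth, if your access point happens to run hostapd, client isolation is a one-line setting. This is only a sketch; the interface name and SSID below are placeholders for whatever your setup actually uses:

    # /etc/hostapd/hostapd.conf (interface and SSID are placeholders)
    interface=wlan0
    ssid=HomeIoT
    # Drop traffic between associated wireless clients, so nothing else
    # on the LAN can talk to the Echo/gHome directly.
    ap_isolate=1

The devices still reach the mothership fine; they just can’t see, or be seen by, their wifi neighbours.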
Isn’t the essence of modern technology a trade between privacy and utility? Hell, when you log in to Chrome you lose a lot of privacy as tech companies track what you search, but you get quite a bit of utility out of the internet. I don’t think these assistants provide a ton of utility for your average person, but for some they present a good trade-off.
You think that would put Google off? YouTube alone already adds about 3PB every month.
This is quite interesting, but the payoff is at the end when they start talking about Google moving into sound, since they’ve already conquered written text and imagery.
Why do you think it must be an all-or-nothing binary?
The risk/reward varies for different people for different devices. I have a cellphone, but no interest in an internet connected house. Does that make my decision wrong? No, it just makes my R/R calculation different to yours.
I generally agree but if you want to get paranoid about it, the device could plausibly be running speech-to-text locally and then transmitting the compressed text of conversations along with the audio of commands. The local speech-to-text wouldn’t be as good as what can be done server-side, of course.
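To be concrete about what that hypothetical would even look like: offline recognition plus compression of the transcript is entirely doable on modest hardware. This is just a sketch using the open-source Vosk library; the model path and the recording file name are made up, and it assumes a 16-bit mono WAV.

    # Hypothetical sketch of the scenario above: transcribe audio locally,
    # then ship only the compressed text. Assumes Vosk with a model in
    # "model/"; "living_room.wav" is a placeholder recording.
    import json
    import wave
    import zlib

    from vosk import KaldiRecognizer, Model

    wav = wave.open("living_room.wav", "rb")
    rec = KaldiRecognizer(Model("model"), wav.getframerate())

    words = []
    while True:
        chunk = wav.readframes(4000)
        if not chunk:
            break
        if rec.AcceptWaveform(chunk):
            words.append(json.loads(rec.Result())["text"])
    words.append(json.loads(rec.FinalResult())["text"])

    # The compressed transcript is tiny compared to the raw audio.
    payload = zlib.compress(" ".join(words).encode())
    print(len(payload), "bytes to transmit")

The point being that a text payload like this would be small enough to hide in ordinary-looking traffic, which is exactly why it’s the paranoid scenario worth thinking about.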
@vodalus you are incorrect, as anyone who has delved into the development side knows. Wake words are identified by software on the unit itself; the developer setup lets you pick between different implementations. Ever notice that the wake word works before the device has connected to a network, or when the internet is down? How would that be done server-side with no ability to reach a server? Beyond that, as anyone with Wireshark or a router can verify (there’s a rough sketch of that check below), there isn’t enough traffic for it to be sending data unless the wake word is spoken. Personally, I don’t plan on saying a wake word right before sharing sensitive information I’m worried about someone hearing.

Sure, Google knows when I wake it and keeps that data to build a voice recognition profile; it would be crap without one, and there isn’t enough internal memory to store all the samples on the device. You can delete those recordings at any time. Insert conspiracy about not actually deleting the data. The Mini touch bug that triggered recording was taken seriously enough that they wiped an entire time period of data and disabled the touch-to-wake feature.
Everyone else must be gun-running drug smugglers or something; I’m not sure what use there is in listening to me ask Google questions or call Santa.
Ohh, they requested the data from one of the devices where a major crime occurred. Good, now I know to wake one of the devices if something bad goes down and I’m murdered, and to unplug the devices should I murder someone else. Which I won’t, unless my tech overlords make me…
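For anyone who wants to reproduce the traffic check mentioned above without firing up Wireshark, here’s a rough sketch using scapy instead. The device IP is a placeholder for whatever address your Echo/Home has on the LAN; run it, sit quietly for a while, then say the wake word and watch the byte count.

    # Rough sketch of the "watch the traffic yourself" check. Needs root
    # and libpcap; DEVICE_IP is an assumed placeholder address.
    import time
    from scapy.all import sniff

    DEVICE_IP = "192.168.1.50"
    start = time.time()
    total = 0

    def count(pkt):
        global total
        total += len(pkt)
        print(f"{time.time() - start:6.1f}s  {total:>8} bytes to/from {DEVICE_IP}")

    # The BPF filter keeps only the smart speaker's packets.
    sniff(filter=f"host {DEVICE_IP}", prn=count, store=False)

Background chatter (NTP, keep-alives) shows up as a trickle; a continuous audio upload would stick out immediately.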
Can I change the wake sound or whatever to something else, like, clapping twice?
I am certain there are some perfectly wonderful people named Alexa, but Amazon ain’t one of em. The thought of wandering around my home saying Alexa this or Alexa that has for me a very high revulsion factor. Super high. Probably higher if I were also high.
Yeah, for whatever reason, home automation isn’t interesting to me… it’s just more stuff that will confuse the other people I live with, and it adds more complexity that can break down or need troubleshooting. I also don’t find turning the lights / TV / music on and off all that onerous a task to do manually.
And the real data-collection device is your smartphone, which already sends out your position continuously, any message you exchange, the list of your contacts, pictures you take (with face identification), your diary, whatever you access on the web, what music you listen to, videos you watch, everything you buy online, etc…
Not automatically. It should just be a matter of MIPS and memory. The cloud-based systems were designed to accommodate the low-end phones of n years ago. With multiple cores and more RAM, I wonder what the threshold is for doing what they do locally. I’d bet that a modern multi-core PC could do it.
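One easy way to test that hunch is to measure the real-time factor of an off-the-shelf offline recognizer on your own machine. A sketch, again assuming the Vosk library, a downloaded model in "model/", and a placeholder 16 kHz mono "sample.wav":

    # Quick-and-dirty real-time-factor check: transcribe a clip offline
    # and compare wall-clock time to the audio's length.
    import time
    import wave

    from vosk import KaldiRecognizer, Model

    wav = wave.open("sample.wav", "rb")
    audio_seconds = wav.getnframes() / wav.getframerate()

    rec = KaldiRecognizer(Model("model"), wav.getframerate())
    t0 = time.time()
    while True:
        chunk = wav.readframes(4000)
        if not chunk:
            break
        rec.AcceptWaveform(chunk)
    rec.FinalResult()
    elapsed = time.time() - t0

    # A real-time factor below 1.0 means the machine keeps up with live speech.
    print(f"audio: {audio_seconds:.1f}s  decode: {elapsed:.1f}s  RTF: {elapsed / audio_seconds:.2f}")

The cloud systems will still win on accuracy, but this gives a feel for whether "good enough" recognition already fits on local hardware.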