I’m spinning this off into its own thread because this is no longer about baby monitors, or why or even if people should use them. (For the record, I don’t think it’s a bad thing.)
When it comes to internet enabled devices, is the average consumer warranted in expecting security and privacy?
I don’t mean that people shouldn’t get privacy and security or that they don’t deserve it, I mean:
Given that electronic devices, and especially network-enabled devices, offer convenience with privacy as a trade-off, what is the foundation for the average person’s trust that “smart” technology will be secure?
If Costco is selling internet-enabled baby monitors, can we safely assume that because a product is sold in stores, it is somehow well made, fit for purpose, and risk free?
How do we come to trust technology that hasn’t proven it can be trusted?
I think you’re taking a step too far - I don’t think the average consumer even THINKS about whether or not they trust the security of their technology. If wireless routers didn’t mostly come with security enabled by default (or as part of the “let’s set up your new router” wizard for dummies), I know that the vast majority of non-techy people I know wouldn’t even have that basic level of security set up.
I think you are asking the right question but in the wrong way. It’s not “technology” that is to be trusted or not, but the people behind the things, or more accurately, how well trust is enforced on a given thing as part of the social contract/legal system. For example:
TLS is nifty technology but that isn’t what makes it “safe” to shop with a credit card on Amazon. What enables trust (at least in most countries) is that there is a system in place to deal with fraudulent use of a credit card.
Lots of safety technology is in your car, not because the manufacturer cares so much about your safety, but because governments forced them to implement it.
Cars and credit cards are pretty easy to make “trustworthy” compared to all the beep boop that fills the marketplace. Plus the potential harm which comes from insecure/misconfigured beep boop isn’t as visible in terms of impact as “my money is gone” or “my child is hurt or dead”.
I’m bummed I missed this awesome thread before. Such interesting questions!
I agree that asking about trusted relationships, rather than smart engineering, gets at the more central question.
So long as a private interest potentially adverse to the public trust can exert exclusive power over a technology, the key problems that @tachin1 identified for the thread remain.
And to the extent that private interests potentially adverse to the public trust lack that sort of power, the problems diminish.
For example, public roads, water and power serve rural communities though doing so isn’t profitable.
Whether “consumers” really want or expect the services is less relevant than whether, as citizens, they expect them.
Democracy is undermined when we presume that “greed is good” — that private profitability necessarily serves the public interest.
Can I suggest you step back a bit? Operating from an idea of a “greed is good” motive is not what this is about, as I see it. It’s not at heart an economics issue or a “let’s get angry at capitalists” issue.
“Secure” technology is possible, but under limited conditions. Yes, those conditions are expensive, but more importantly they are complicated. Complicated not just by the technology itself but by the underlying complexity which has bedeviled information systems since the start. For the most part these issues scale poorly when it comes to beep boop.
I’m sure some of our regular techno-messianists will contradict me, but that’s how I see it after a couple of decades working in security.
I think we largely trust them because the rate of abuse is low. Examples of invisible technology that most people wouldn’t even identify as tech:
Fragile glass windows.
Simple tumbler locks on our doors.
Garage door openers.
Wireless house phones.
Our strongest security measures have historically been social, inexpensive, and convenient, not designed for resistance against determined attackers. The embedding of tech into everything just creates an environment where malicious manipulation or capture can be done more easily. The convenience and relative inexpensiveness of computing power in our devices generally means that the ability to abuse them will become cheaper as well. Tech brings great leverage, leverage which can be used for good or bad.
The hard part is that so much of “security” is just confused secrecy. Proprietary, walled-off systems have been the historical standard (locked vaults). We know that digital secrecy != security in any way. Unfortunately, digital security is way more complicated than a clean head of lettuce and much harder to test for. I’m not sure the tension is between trust and smart so much as between closed (private security systems for profit, protected by govt mandate and legal monopolies) and open standards/systems.
Can we introduce and maintain consistently high open standards without falling back on govt coercion and enforcement, which tends to lead to political power grabs and contracts awarded based on politics and $ rather than on merit and public good?
A clean head of lettuce that doesn’t make you sick is a great model, but look at the FDA when it comes to drugs and weighing public health risks: we end up with very useful drugs coming off the market or being denied entry because of a minority of drastic side effects, or because of the very political process for approving new drugs. When the decision making gets that complicated and tied to power and $, many voices call out that there is too much regulation. Digital security is much more akin to drugs than lettuce, or maybe I’ve just overlooked the “free the lettuce” campaigns.
I think this works. Open standards and systems would weigh on the public trust side — generally — and proprietary systems on the corporate, for-profit side. I think the former generally has the better arguments.
I think you’re right again here though it’s more a matter of degree.
The complexities of for-profit genetic engineering, mass farming production standards (e.g. herbicides, pesticides, water use, etc.) and discriminatory labor practices make arguments about non-local agriculture more analogous to the drug arguments than one might first expect.
For me, an insufficiently democratic regulatory regime is an argument against excessive for-profit corporate control more than an argument against regulating complicated stuff.
Some folks who work in banks can chime in here, but IIRC, when I bought my house there was a section called something like, “Plain language facts about your mortgage” or somesuch (obviously that wasn’t the name, but you get the point?). That is, lenders have to say, in plain language (and not diamond-dense legalese) what the loan entails, what’s going to be paid by them, by me, and how much it will cost over the long haul, etc.
I would love to see something similar enacted for companies building Internet of Things devices like baby cams and such. “This device will contact Wrecksdart Technologies (WT) every hour on the hour to provide WT with: 1) the local time, 2) the temperature of the device, 3) the loudest sound recorded by the device within the last hour”, etc. etc. If they want to serve advertisements to the user, that will have to be mentioned. Does the new IoT TV send back data regarding every show that’s viewed by the consumer? Then they have to say that. Technology has reached the point where the most innocuous-seeming device can essentially record every utterance, electronic or otherwise, of the user–producers should have to tell us exactly what data the device grabs, in plain language (see the rough sketch after this post).
Otherwise, we’re back to “Trust us! We’re Sony!” oh, hey, we’re gonna install a rootkit on your computer, it’s really nothing you should worry about.
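Just to make that idea concrete, here’s a minimal sketch of what such a disclosure could look like if it were both machine-readable and renderable as plain sentences. Everything in it (the “WT Baby Cam 3000” name, the field names, the schema itself) is hypothetical, borrowed from the example above rather than from any real labelling standard.

```python
# Hypothetical "plain language" data disclosure for an IoT device.
# The vendor, product name, and schema are made up for illustration.

DISCLOSURE = {
    "device": "WT Baby Cam 3000",            # hypothetical product name
    "vendor": "Wrecksdart Technologies",
    "phones_home": True,
    "frequency": "every hour on the hour",
    "data_sent": [
        "local time",
        "device temperature",
        "loudest sound recorded in the last hour",
    ],
    "shared_with_third_parties": False,
    "used_for_advertising": False,
}


def render_plain_language(d: dict) -> str:
    """Turn the structured disclosure into the plain sentences a buyer
    would actually read on the box or in the setup wizard."""
    items = ", ".join(d["data_sent"])
    lines = [
        f'This device will contact {d["vendor"]} {d["frequency"]} and send: {items}.',
        "Data IS shared with third parties." if d["shared_with_third_parties"]
        else "Data is NOT shared with third parties.",
        "Data IS used to serve advertising." if d["used_for_advertising"]
        else "Data is NOT used to serve advertising.",
    ]
    return "\n".join(lines)


if __name__ == "__main__":
    print(render_plain_language(DISCLOSURE))
```

The point isn’t the particular format; it’s that the structured part is what a regulator could audit, and the rendered part is what the consumer actually reads.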
It certainly doesn’t help that consumer electronics pushers are happy to gloss over (and sometimes be outright misleading about) glaring security problems; but my impression is that a great many people think about a complex system in terms of solutions: they want a baby monitor, or wifi, or whatever to do what it is supposed to. When something they buy doesn’t do that, they are usually pretty good at recognizing it and either bodging at it until it does, or returning it.
Fewer people (and fewer still competent enough to be dangerous/useful) look at a complex system and think “Hmm, nice house of cards you got there. I wonder if it will break when I prod at it?”
It is hardly impossible to secure things in the first frame of mind (most of routine “unpatched systems are bad practice and bad practice is broken, I must fix things until we are in compliance with good practice” type IT security checklist stuff is pretty much that); but it is a great deal harder to imagine exactly how unpleasant a system’s failure modes can be if you are thinking in terms of solutions: your default worst-case scenario is either total failure or frustrating intermittent failure to do whatever it is supposed to; not something that is effectively extreme success, just in the wrong direction.
If you are looking at the situation in terms not of the problem you want solved, but of all the little moving parts grinding together in an attempt to solve it, it becomes far easier to envision assorted malicious re-purposing, dangerous assumptions, and similar.
This isn’t exclusive to tech stuff, a variety of unpleasant accidents and occupational health and safety problems end up boiling down to somebody not thinking through the fact that ‘failure’ can get a lot worse than ‘not doing its job’; but the complexity and versatility of IT gear certainly helps, as does the fact that so many security issues differ only from ‘working’ in that the wrong person is being allowed to execute programs or given access to data, so almost all the symptoms of success are still present; because the system is, mostly, doing exactly what it was built to.
Basically what I was getting at but I’m not famous enough to be published.
In the trade we talk about information security as a triangle of confidentiality, integrity, and availability. Unfortunately, too many people mistake confidentiality for the whole of security. The conversations around “privacy” are a classic example of this confusion.
Let’s look at this again from some different angles:
Dirty lettuce is a threat to health, which is in fact a security concern, in that it threatens the availability of one’s ability to function normally and the general integrity of one’s health.
Beep boop tech may put one’s information at risk of a breach of confidentiality; “smart” items which are poorly secured may put the integrity of your data, or your sense of well-being, at risk of disruption.
The risk to life and limb is hardwired into our brains whereas the data integrity/availability issue is more abstract. The first will be more compelling for social regulation whereas the second less so.
This is not to say that societies should not regulate the concerns around beep boop, but that it is far more difficult to do so. With food safety it is reasonably easy to define what constitutes a relevant threat and develop a system of inspection & regulation around the potential threats. Far less so for data. We can’t even agree on the risks, much less come up with a general system of weighing them, so how can we legislate and inspect this issue?
At the risk of possibly misunderstanding you here, I’m afraid you may have wandered into the very trap I cautioned against above regarding “techno-messianists”. While it may be true as an axiom that open standards and systems can be more trustworthy (for various definitions of trustworthy), that doesn’t mean that open is a fix for the problems around beep boop because in the end, this trust entirely depends on the availability of updates and the ability to deliver them in a meaningful way. See also the problem of vaccination.
What you suggest here would be wonderful. The difficulty arises in defining the potential problems. Mortgages are relatively “simple” contracts between the providing entity and the customer. What goes on behind the mortgage between the providing entity and other entities can be horribly complex. Remember Lehman Brothers? Lehman was in the position to be able to say “Trust us! We’re Lehman!” until they weren’t. The problems here are very similar in that the complexity of back end code behind a device and all the interrelations of various libraries and other devices which the device interacts with can affect the trustworthiness of the end device.
The license isn’t the problem. The issue is “how to ensure that patches are delivered correctly”. Take, for example, the various open crypto transport libraries, both server and client side. They have a generally good history of fixing bugs. For every maker of some device that uses them, however, how do you make sure those updated libraries and the code which depends on them get patched? X thousand servers and Y million devices. What about the companies that go out of business or exist really only as a name on a box? etc. etc. etc. (A sketch of just the audit half of that problem follows below.)
Let’s also not even get into how to enforce that end consumers of this stuff have it on by default and can’t easily disable it.
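To make the scale problem concrete, here is a minimal sketch of only the easy half: auditing which devices in a fleet still embed an unpatched library, or belong to a vendor that no longer exists. The inventory, the Device fields, and the “1.2.5” minimum version are all made up for illustration; none of this is a real vendor API, and it says nothing about the genuinely hard part, actually delivering the fix to Y million boxes.

```python
# Hypothetical fleet audit: flag devices whose embedded crypto library
# is older than the minimum patched version, or whose vendor is gone
# and will never ship an update at all.

from dataclasses import dataclass


@dataclass
class Device:
    device_id: str
    vendor: str
    crypto_lib_version: str   # version the device reports, if it reports at all
    vendor_still_exists: bool


def version_tuple(v: str) -> tuple:
    """Crude comparable form of a dotted version string; real library
    versioning (letter suffixes, backported patches) is messier than this."""
    return tuple(int(p) for p in v.split(".") if p.isdigit())


MIN_PATCHED = "1.2.5"   # hypothetical minimum safe library version


def audit(fleet: list[Device]) -> list[Device]:
    """Return devices that are unpatched or effectively orphaned."""
    at_risk = []
    for d in fleet:
        orphaned = not d.vendor_still_exists   # nobody left to ship a fix
        outdated = version_tuple(d.crypto_lib_version) < version_tuple(MIN_PATCHED)
        if orphaned or outdated:
            at_risk.append(d)
    return at_risk


if __name__ == "__main__":
    fleet = [
        Device("cam-001", "Wrecksdart Technologies", "1.2.5", True),
        Device("cam-002", "Name-On-A-Box Ltd", "1.0.2", False),
    ]
    for d in audit(fleet):
        print(f"{d.device_id}: needs attention (vendor gone: {not d.vendor_still_exists})")
```

Even this toy version assumes the device honestly reports what it runs and that someone is maintaining the inventory, which is exactly where the “name on a box” companies fall out of the picture.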
I really dislike the term “internet of things”. Dislike it to the point where I get an upset stomach if I have to type it or IoT too many times. So for this thread anyway and probably for the future, I just decided to call all that connected crap “beep boop” since damn near all of it makes some annoying little sounds.
I take your point, but we are not dealing with genetic code that wasn’t written by human hands; we are dealing with code that was written by a person and connects to other devices via human-generated code. Complex? Likely. Impossible to define, even in a general sense, if need be? Hardly.
I’m not asking for my router’s creator, Company X, to tell me of the myriad ways it opens/closes ports or routes traffic to and from my computers to the internet; I’m interested in hearing, to use your information security triangle analogy, how Company X’s router potentially violates my confidentiality. That is, what data of the consumer’s do you, Company X, take for your own and deliver to your own servers to use for purposes Company X defines. And given that companies use data on our activities ALL THE TIME, it shouldn’t be that difficult to say, “We slurp your location and book-purchasing data to our servers for resale to advertising aggregators”, or something along those lines.
I fully expect tech producers will then employ their vast armies of lawyers to dream up mealy-mouthed word salad (maybe they’ll hire Palin to write it) such that consumers misconstrue exactly what’s being taken from and about them, but there has to be a starting point somewhere. Vizio, among numerous other tech producers, happily slurps up the viewing data of their consumers and directs that traffic to specific network points–someone at Vizio was told to build that system, and they should have to tell us about it, too.
Honestly, that’s less my concern than the issues I covered just above in reply to @hello_friends. I expect that most beep boop will phone home for various things anyway; for me, there’s nothing at risk in that. I worry more about flaws in the code of the TV which might give someone easier access to something else on my LAN and potentially cause issues with the integrity or availability of my resources.