What about some sort of browser extension that monitors the creation/accesses to canvas, local storage, and others, and displays when they are used?
This behavioral monitoring could then be crowdsourced into a database and the perpetrators named, shamed, and adblocked.
This kind of thing would be great! Unlike spamming, this is a highly point-source and command-based situation — the peculiar uses of a lot of these tools for cookie storage (as the writers of the paper note with canvas) don’t generally overlap with other uses. Thus, if a Flash object loads whose only purpose is to set an HTTP browser cookie from its LSO “cookie,” there are already tools to block that. But a generalized tool would, as you suggest, look at a variety of behaviors — and potentially alert someone when more than one is used on a given site and then scrub all the changes.
I was also thinking that a site profiler could be useful. I go to page X and I want to know precisely the set of browser-side data that is stored and could be sent back (which would sweep in ETags and the like). I’m not sure such a profiler exists, and it might only appeal to more sophisticated users. But it would be a good step in letting users report bad actors, too.
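A rough sketch of what the monitoring hook at the heart of such an extension could look like. Everything here is made up for illustration: a plain object stands in for a real canvas so the snippet runs outside a browser, and a real extension would instead wrap `HTMLCanvasElement.prototype.toDataURL` from a content script.

```typescript
// Sketch: wrap a canvas-like object's toDataURL so every pixel read
// is recorded before the data is handed back to the page script.
// In a real extension you would patch HTMLCanvasElement.prototype.

type CanvasLike = { toDataURL: () => string };

const accessLog: string[] = [];

function instrument(canvas: CanvasLike, site: string): CanvasLike {
  const original = canvas.toDataURL.bind(canvas);
  canvas.toDataURL = () => {
    accessLog.push(`${site}: canvas pixel read (possible fingerprinting)`);
    return original();
  };
  return canvas;
}

// Stand-in for a canvas the page created.
const fakeCanvas: CanvasLike = {
  toDataURL: () => "data:image/png;base64,AAAA",
};
const watched = instrument(fakeCanvas, "example.com");
watched.toDataURL(); // the read still works, but it is now on the record
```

The same wrap-and-log pattern extends to `localStorage.setItem`, `CanvasRenderingContext2D.getImageData`, and the other storage/readback APIs, which is what would feed a crowdsourced database.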
There should be an option for them to exclude themselves from the database, though.
Let’s keep whitelisting on the end-user side. Otherwise, the best way for a site to get itself excluded should be to stop doing this in the first place.
I read about this the other day. Shouldn’t it be possible to set up a browser extension that introduces subtle errors in the canvas?
Also, according to the EFF, this is only 80-90% accurate, which is almost as good as zero. So the company must have one hell of a salesman.
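On the subtle-errors idea: the core of it is just perturbing the pixel buffer before it can be read back, so the image looks identical to the eye but hashes differently on each visit. A minimal sketch, with the array below standing in for `ImageData.data` from `getImageData()` (RGBA channel layout assumed):

```typescript
// Sketch: flip the low bit of a small fraction of red-channel values
// so a fingerprint hash over the pixels is no longer stable, while the
// visible image is unchanged for practical purposes.

function addNoise(pixels: Uint8ClampedArray): Uint8ClampedArray {
  const out = Uint8ClampedArray.from(pixels);
  for (let i = 0; i < out.length; i += 4) { // R,G,B,A per pixel
    if (Math.random() < 0.05) out[i] ^= 1;  // nudge ~5% of red channels
  }
  return out;
}

// Two pixels of fake image data: (200,100,50,255) and (10,20,30,255).
const original = new Uint8ClampedArray([200, 100, 50, 255, 10, 20, 30, 255]);
const noisy = addNoise(original);
```

A real extension would apply this inside wrapped `toDataURL`/`getImageData` calls, which is also where the "other trouble" comes in: legitimate canvas readbacks (image editors, games) get slightly wrong data too.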
There’s a lot to be said for running your own local DNS server. Not least that you can blacklist ALL traffic to some of these domains. I realize that in principle “services” like addthis.com offer some marginal benefit to social networking sites like Facebook.
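For anyone wanting to try this, the whole-domain blacklist is a one-liner if you happen to run dnsmasq locally (the domain shown is just the one mentioned above):

```
# /etc/dnsmasq.conf: answer 0.0.0.0 for addthis.com and every subdomain
address=/addthis.com/0.0.0.0
```

Any client pointed at that resolver then gets an unroutable address for the tracker, which also covers subdomains a HOSTS file would miss.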
I’ll just have to live with my crushing disappointment.
Good enough for drone targeting.
Yes, but that may cause other trouble and getting such a thing widely deployed is tricky.
That’s an incomplete statement. One developer says, and AddThis seems to agree, that the canvas hash by itself doesn’t produce a unique value some percentage of the time, especially on mobile. However, as noted in the paper and in my article, the canvas drawing coupled with other data increases entropy, and we’re only really seeing version 1.1 (AddThis’s adaptation) of the fingerprintjs approach. This isn’t an exhaustive approach yet, and we don’t know that it can’t be refined.
The more general point, and what I focused on here, is how seemingly innocuous improvements can have a powerful impact on privacy and anonymity.
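To make the entropy point concrete, here is a sketch of how a fingerprintjs-style library combines a non-unique canvas hash with other low-entropy signals into one identifier. All signal names and values below are invented for illustration:

```typescript
import { createHash } from "crypto";

// Sketch: join sorted key=value signals and hash them. Each signal is
// weak on its own; together they narrow the browser down sharply.
function fingerprint(signals: Record<string, string>): string {
  const joined = Object.keys(signals)
    .sort()
    .map((k) => `${k}=${signals[k]}`)
    .join("|");
  return createHash("sha256").update(joined).digest("hex");
}

// Same (made-up) canvas hash, but one differing signal yields a
// completely different fingerprint.
const a = fingerprint({ canvas: "af39", screen: "1920x1080", tz: "-300" });
const b = fingerprint({ canvas: "af39", screen: "1366x768", tz: "-300" });
```

That is why “not unique some percentage of the time” for the canvas hash alone doesn’t say much about the combined technique.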
While I personally don’t appreciate being tracked with no option to disable or even understand/know about what information is being saved about me, I contend there is legitimate business use for such tracking that shouldn’t be revealed to end users. In the past, I’ve implemented tracking similar to this in browsers to combat credit card fraud and account hijacking. This is the kind of thing you don’t want to reveal to the user, because the sooner the bad users realize what you are doing, the sooner they will subvert your checks and just commit fraud differently. I know it’s one of those temporary sorts of deals that won’t last forever (sooner or later technology changes and new tools come out), but I still consider it a legit use case for covert tracking.
It’s such a difficult problem: how do you reliably and technically identify the bad users while giving the good users a choice?
Maybe privacy and security are too complex to be framed in terms of good guys vs. bad guys.
Ah, yes: security through obscurity.
Of course, the evildoers are generally the first to find out regardless. The average luser, on the other hand, never does and so remains vulnerable to all sorts of stealthy privacy violations.
If these trackers were really so wonderful, they wouldn’t be so hard to disable. The fact is that the perps pushing them know that people would disable them if they could – which is why the technology is 90% in common with other forms of malware.
The idea that catching criminals eliminates responsibility for disclosure seems absurd, and I do seriously doubt that tracking browsers for thieves is a deterrent or aid when people boot from Live CDs/DVDs, wipe machines, use chroot jails, etc., etc.
But I don’t know. Maybe there’s a case for it. However, the logic that this trumps user privacy, including laws that prohibit such behavior in some countries, is a little ridiculous.
The browser extensions would be good. How about sites that check for such stuff on behalf of visitors and remove or subvert it on request? And various scrubbing sites that do such things automatically in varying ways. Trollbots that search the web for sneaky sites to blacklist them or mount disinformation campaigns against them. And so on…
I don’t see how you can have a legitimate business use for unwanted tracking.
I mean, yes, I see how someone would want to do this. But it’s hard to reconcile any legitimate business interest with said business deliberately deceiving its own customer base.
If my bank guaranteed perfect (or near perfect) security by offering to silently track me, I might think about giving consent. I wouldn’t need to understand how they do it, but I would have to understand my role in it.
Security measures taken on my behalf? Maybe not.
Tracking my activity on the internet? The real question is WHY it is so important that I not find out I’m being tracked. It’s not for my safety.
Also, it wasn’t hidden particularly well. You can’t hang invisible fuzzy dice on my rearview mirror and blame me for finding them. “Awww, you went and found them, now I can’t keep you from crashing!” Wait, what?
I fundamentally agree with this. Though there is a hidden assumption in posing the question this way. The assumption being that “giving users a choice” is a constraint to technical solutions. Privacy is actually assumed by the user and questioning a need for it is the real flaw.
The other hidden assumption is that the type of tracking we are talking about here is being done for the benefit of the user. There’s merit to what you say as philosophical speculation and as a nod to realpolitik. It’s just that this isn’t a security measure, at all.
Extensions are not a long-term solution: too many extensions go unmaintained as the browsers rev. (This is also why I say “you can always change it with an extension” is a cop-out, meant more to blow off complainers than as a helpful statement.) It also puts a large burden on consumers to try to keep up with the latest threats.
As the paper notes, this is a cat and mouse game favoring the cat, played on multiple fronts. In the front matter for this paper alone they mention supercookie respawning based on Flash, ETags, the localStorage API, the WebStorage API, sessionStorage, IndexedDB, cached images, and basic properties such as screen size and color depth. This is after major browsers recently closed a fingerprinting/tracking method that involved reading back link styles from the DOM (which they did by basically breaking the standardized API). I think it’s safe to say the number of vectors is only going to grow with new standards.
The regulatory issue is tricky, too. If sites are located outside the country or region in which a regulatory regime applies, and they lack any nexus or operations in your country, then they could ostensibly behave with impunity, especially if they’re selling digital products: indictment would be impossible, and the remaining remedy, blocking access to the site, is ineffective in any case.
Enforcing rules on tracking behavior doesn’t even seem to work when companies violate existing laws, because the penalties are minuscule compared to profits in most cases.
I’ve been getting crazy with that HOSTS file