An analysis of all those Internet of Things manifestos sparked by the slow-motion IoT catastrophe

Originally published at: https://boingboing.net/2018/05/30/responsibilisation.html

2 Likes

Wouldn’t this all be so much easier if we just designed a system with no reward for gaming it?

No matter how difficult we make it to get the next piece of cheese, some lucky mouse will always get it.

1 Like

HCl? Hydrochloric acid?

That would just be a system without a reward, though, right? If there’s any kind of desirable or non-desirable outcome, people will find ways to maximize the desirable or avoid the non-desirable.

ETA: As I’m thinking about it, even rewardless systems can have desirable outcomes. There’s no reward for commenting here, or editing Wikipedia pages, but I do both occasionally. The reward is personal to me, and therefore valuable. Can I game both? Possibly, depending on how much “reward” I’m getting out of it. And if there’s a rewardless system that I’m somehow obliged to work within, I may find ways to reduce my time commitment – also a personal reward.

Does that model come with the Magic Wand?

1 Like

Most humans don’t behave like that, at least not all the time.
Accumulation in particular is a cultural product, not a biological one.

Also, what is desirable? The idea is so diffuse that it doesn’t really have an answer, I think.

Do we not do this already?

I think what is “desirable” in a system will be seen in what people do to maximize efficiency, reward, personal reward, etc. Getting good grades is desirable, so some people maximize by studying hard, others by cheating. You put more effort into one path based on how severe the costs of the other path are. If it’s hard work for me to study, but easy for me to cheat, I’ll cheat. OTOH, if it’s not that hard for me to study, but it’s very likely that I’ll be caught and expelled for cheating, I’ll probably just study hard.
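To make that concrete, here’s a toy expected-cost model of the study-vs-cheat choice (a sketch only; all the numbers are made up for illustration):

```python
# Toy expected-cost model of the study-vs-cheat choice described above.
# All numbers are invented; "cost" is in arbitrary effort units.

def expected_cost(effort: float, p_caught: float = 0.0, penalty: float = 0.0) -> float:
    """Effort spent plus the probability-weighted cost of getting caught."""
    return effort + p_caught * penalty

study = expected_cost(effort=10)                            # hard work, no risk
cheat = expected_cost(effort=1, p_caught=0.8, penalty=100)  # easy, but risky

# With a high chance of expulsion, studying is the cheaper path.
print("study:", study, "cheat:", cheat, "->", "study" if study < cheat else "cheat")
```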

Having this conversation is work, but not so much work that I’d say, “Eh, not worth responding.” Maybe on another day my priorities would be different, and I wouldn’t even check to see if you had responded. There’s clearly some “desirability” for me in at least responding right now.

Yeah, I’m pretty sure efficiency, cheating, fairness, maximizing desirable outcomes, and avoiding non-desirable outcomes are pretty much ancient human behaviors that can be seen in every human culture. Or at least, I can’t think of a human culture that is notable for the absence of these features.

Out of curiosity, are you thinking of a system that doesn’t have some reward tied into it? How would it work? That might help me understand where you’re coming from, because I can’t see it.

Is not the obverse of

And additionally, I think that people in general aren’t maximizers; we’re often quite content with just “ok.” Sure, we avoid bad situations which might cause us pain, but we are altruistic enough to throw ourselves in harm’s way for others, to the point of sacrifice, with little to no thought of benefit. Sure, we rationalize actions, but not constantly. I’m certainly not maximizing in this conversation: there’s absolutely no doubt that my time would be better spent doing something else (a few things, actually), but here we are, and I appreciate the interaction at any rate.

FWIW, the cultures that have been most efficient with resources (as in producing the least waste while making the most use of what they take) are the ones which practice(d) the lowest levels of accumulation.

The non-gameable system is the one where you win at the start: the reward is life and the resources to achieve whatever that might mean to you.

Of course I realize that this is a hard problem. I just tend to think that our continued existence depends on solving it; or, more succinctly: if we don’t reorient our methods drastically, we will be gone in a geologic blink of an eye.

1 Like

A lot of the more naive adventures in “IoT” (as opposed to the “no, the whole point is to build a dystopian hive of ubiquitous embedded surveillance, what?” genre) seem to live at the point where Metcalfe’s law, true in broad strokes, starts to break down in the areas that aren’t as conducive to its implicit positive assumptions:

You can argue about exactly how greater-than-linear the effect is, but it certainly seems to be the case that the systemic value of compatibly communicating devices grows as a greater-than-linear function of their number.
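As a back-of-the-envelope illustration (a sketch assuming the classic Metcalfe formulation, where value tracks the number of possible pairwise links; other growth laws have been proposed):

```python
# Classic Metcalfe's-law intuition: with n compatibly communicating devices,
# the number of possible pairwise links is n * (n - 1) / 2, so "value" grows
# roughly quadratically while the device count grows only linearly.

def pairwise_links(n: int) -> int:
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(f"{n:>5} devices -> {pairwise_links(n):>7} possible links")

# 10x the devices yields ~100x the links -- but, as noted below, nothing
# guarantees that the extra "value" is extracted evenly, or positively.
```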

What isn’t so true is that there is any reason whatsoever to expect that ‘value’ to be extracted evenly, or even by means of positive-sum interactions.

The wonderful world of “IoT” gives us a profusion of devices too worthless, to their vendors or owners, for securing them to be worth the trouble, but still compatibly communicating with the devices that are actually valuable, and so, once recruited into botnets, quite useful for various sorts of negative-sum interactions with those devices.

Excellent question! I think Human-Computer Interface.

I clicked on both of the links; neither defines it (presumably the paper on sci-hub does, though). I noticed HCI had its own boingboing tag and went there, but there was only one other article so tagged, and that one doesn’t use the acronym! However, it was on the subject of Brain-Computer Interface, hence my guess above.

1 Like

Human-Computer Interaction (HCI): the research and design of user-computer interfaces.

1 Like

Maybe you’ve hit on a key distinction here: an important property of a gameable system is that it requires a situation with imperfect/asymmetric knowledge. When the actors’ knowledge of actions (and their results) and of motivations fully overlaps, no gaming is possible, i.e. “it doesn’t make sense to cheat at solitaire.”

(probably a bit of an artificial example, assuming a non-self-deluding, intact-memory-functions player).

It doesn’t make sense to cheat at solitaire, unless the point is to have something to do, in which case re-shuffling cards or whatever becomes a gameable option. It’s been a while since I’ve played any of the solitaire versions on a PC, but I vaguely remember an “undo last move” option.

In my mind, everything is gameable to some extent, because that’s just how humans are. We’re curious, problem-solving animals that learn by doing and tinkering. I’m fascinated at how much of history is filled with people trying things (inventions, travel, etc.), just to see what would happen. I think the Internet of Things probably started that way: nobody really needed their sweater or toaster connected to the Internet, but since you could, why not try it? Manipulating the system to do harm came afterward. Without bad actors, though, I’d assume that people would eventually game the IoT anyway: “I learned that if I leave my sweater by the west window, it will be in full sun at 4:00 PM, which will turn on the air conditioner in the house. I’ll be home to a cool house at 5:00 PM…” And so on.
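For what it’s worth, the sweater trick games a rule that might look something like this (a toy sketch; the threshold, sensor reading, and device are all hypothetical, not any real IoT API):

```python
# Toy sketch of a naive temperature-trigger rule like the one being
# "gamed" in the sweater example. Everything here is hypothetical.

class AirConditioner:
    def turn_on(self) -> None:
        print("AC on")

AC_TRIGGER_TEMP_F = 80.0  # naive rule: cool whenever the sensor reads hot

def on_temperature_reading(temp_f: float, ac: AirConditioner) -> None:
    # The rule rewards whatever pushes the reading past the threshold,
    # not "the occupant is uncomfortable" -- which is what makes it gameable.
    if temp_f > AC_TRIGGER_TEMP_F:
        ac.turn_on()

# Gaming the input: a sweater left in the 4:00 PM sun near the sensor
# warms the reading past the threshold, so the house is cool by 5:00 PM.
on_temperature_reading(84.2, AirConditioner())
```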

HCI = Human Computer Interaction

1 Like

I think the word here is “satisfice”: humans generally settle for “good enough” rather than maximize.

1 Like

This topic was automatically closed after 5 days. New replies are no longer allowed.