Researchers checked bugs into the Linux kernel to see if they'd get noticed. The bugs got through. Their uni got banned

Originally published at: Researchers checked bugs into the Linux kernel to see if they'd get noticed. The bugs got through. Their uni got banned. | Boing Boing

13 Likes

Thanks, Rob. I read this piece yesterday, and as a biologist I couldn’t quite square it with the definition of an unethical experiment on humans.

You phrased the argument succinctly. Kudos.

10 Likes

bugulent code

cromulent

30 Likes

I also want to add my voice to say that I think your framing of the issues is better than the ones I read previously. I likewise was having trouble squaring this properly.

4 Likes

I think the key difference here is the intentional introduction of bugs into the kernel. If you’re going to do that, then IMHO you have an obligation to let someone involved know about your plans, because the action you are taking is negative. I think this differs from a typical sting, where you are asking folks to go about their daily business while you record/observe their normal behaviour (or perhaps engage in that behaviour yourself), precisely because you aren’t intentionally introducing potentially negative effects into the system.

I mean, no one is going to take kindly to someone “pretending” to put a “bomb” on a TSA scanner or to taint some foodstuff with dye or something to see if someone catches it - you aren’t getting away with those sorts of tests without having pre-contacted the organizations involved. Just because it’s software and a bunch of developers doesn’t change the fact that you’re intentionally disrupting their workflow without telling anyone.

Penetration tests work this way all the time for network security, and it’s in fact illegal in many jurisdictions to attempt to penetrate someone’s network without permission, even if your intent was to inform them of vulnerabilities. The only case where that’s acceptable is when you are sending normal, benign traffic to endpoints.

IMHO, like so many things in the world, consent is king here.

37 Likes

The Sokal Affair is a bad comparison. Sokal’s work highlighted an important flaw in a large body of “work”, and didn’t do any damage, unlike this.

8 Likes

These people were actively sabotaging an important piece of software used worldwide just to see if they could, in some cases potentially with life-and-death consequences.

That is not passive research by any stretch of the imagination.

Edit: If I loosen the lug nuts on the wheels of ambulances to see if I can get away with it… that’s not research.

24 Likes

They were successful

Were they? From everything I read, none of the buggy patches were accepted.

The problem is that 190 other patches submitted by their group are being reverted/rejected because people are suspicious something got through.

Checking in bugs was bad because it risked damage to the software, and it’s reasonable for the Linux Foundation to exclude an organization that’s conducting covert research that puts the software at risk.

It’s also bad because there was a possibility those patches wouldn’t get reverted, or that testers could have gotten burned by the bugs, or any of the other possible bad consequences of software bugs.

But I’m not on board with this idea that it is unethical human testing. If it is, a lot of social science that doesn’t involve consent forms becomes unethical. Researching whether McDonald’s will take orders off-menu? Unethical. Researching telemarketer decision trees? Unethical.

Not comparable.

Seeing if you can trick a McDonald’s into accepting spoiled meat? Trying to talk a telemarketer into advising some sort of fraud?

Comparable, also unethical.

Behaving suspiciously to see whether police react differently to white or black suspects? Unethical.

Comparable and definitely unethical.

Realize that such research involves asking grad students to go out and instigate encounters with the police. Do you want to explain why your student got a criminal record? Why they got shot?

The big factor getting ignored is consent. The researchers could have approached maintainers who wouldn’t have been directly involved in the reviews, and could have gotten consent as well as safeguards to make sure those bad patches didn’t get into the kernel.

23 Likes

That’s not how research ethics works. Research like this is reviewed by an institutional ethics review board (known as an IRB in the US), which has to weigh up beneficence: basically, does the knowledge gained from the research outweigh any potential harm/damage caused by the research? It’s never a black-and-white question, and it often depends on how well designed the study is, and how well the researchers argue that we need to know the answer to the research question they are asking.

In this case, the researchers sent their protocol to the university IRB, which wrote back to conclude that it’s not human subjects research. As someone who has conducted research on open source communities, and reviewed ethics protocols for other studies, I fundamentally disagree with this decision. This is absolutely human subjects research, and there’s a major reputational risk to the community they were studying. Further, it’s a deception study, and their research protocol should have been explicit about this. I suspect the IRB, like many IRBs, has no computer scientists on its board and doesn’t understand what an open source community is. I’ve seen that happen with my own university’s IRB.

Deception studies are frequently approved by ethics boards (my research group has had some approved), but they come with special considerations, especially around debriefing the participants.

None of this is to say the research should not have gone ahead - it’s entirely possible this study might have passed the beneficence test. But for the IRB to wash their hands of it is entirely wrong.

26 Likes

The main problem is that people at the University of Minnesota kept on doing it, after the paper and the fuss generated last year. Like WTF dudes?!

3 Likes

At the very least, their full disclosure document https://www-users.cs.umn.edu/~kjlu/papers/full-disclosure.pdf claims that all four of their “hypocrite commits” were dropped by maintainers and never merged into the kernel, apart from the one in which they had failed to introduce a bug at all.

The researchers’ concern seems to be that the maintainers mostly dropped their patches for issues other than the bugs they’d intentionally introduced.
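For anyone who hasn’t dug into the paper, here’s a rough sketch of the shape these “hypocrite commits” took: a tiny, plausible-looking error-handling “improvement” that quietly changes who owns a pointer. All names below are made up for illustration; this isn’t code from the actual patches.

```c
/* Illustration only: hypothetical names, not taken from the real patches.
 * The "patched" version adds an error path that frees the context "to
 * avoid a leak", silently changing the ownership contract. Callers written
 * against the old contract then double-free or use-after-free the pointer,
 * and the diff looks like an innocent cleanup in review.
 */
#include <stdlib.h>

struct ctx {
    char *buf;
};

/* Original contract: on failure, the caller still owns c and frees it. */
static int setup_ctx(struct ctx *c)
{
    c->buf = malloc(64);
    if (!c->buf)
        return -1;
    return 0;
}

/* The "helpful" patch: free c on the error path.
 * Any existing caller that frees c after a failure now double-frees it. */
static int setup_ctx_patched(struct ctx *c)
{
    c->buf = malloc(64);
    if (!c->buf) {
        free(c);   /* subtle ownership change hidden in a cleanup path */
        return -1;
    }
    return 0;
}

int main(void)
{
    struct ctx *c = malloc(sizeof(*c));
    if (!c)
        return 1;

    /* A caller written against the original contract: */
    if (setup_ctx(c) != 0) {   /* swap in setup_ctx_patched() here and   */
        free(c);               /* this free becomes a double free        */
        return 1;
    }
    free(c->buf);
    free(c);
    (void)setup_ctx_patched;   /* shown for comparison, not exercised */
    return 0;
}
```

Buried in the middle of a larger kernel diff, a one-line change like that is very easy to wave through, which is part of why the maintainers are so angry about it.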

5 Likes

The irony is that it highlighted a systematic flaw in the system of journal refereeing and peer review, but no one noticed because they just cared about owning a particular kind of postmodern waffle.

It cultivated and fed a specific strain of skeptical hostility to critical theory. Which is fine, but a mixed bag when it comes to where the damage lay.

4 Likes

It’s misleading to say that the submissions were accepted. They made it through the first level of review. That’s it. They weren’t accepted by anyone higher up the chain, and they certainly never made it to release.

Jim Salter:

The substance is that literally the first person on the LKML to respond to their proposed patches frequently didn’t see a problem with it. That does not mean that the code was accepted at all, let alone pushed into a production kernel.

In fact, you can see Greg K-H (and some other kernel devs) noting the bad quality of those patches, if you follow the LKML threads. K-H and cohort were annoyed with Pakki and Pakki’s nonsense patches even prior to the news breaking that the crappy patches were part of what amounts to an unauthorized penetration test.

SOP in red-teaming is to inform people in the project that you’re doing it. They had an ethical obligation to inform someone higher up in the chain – Torvalds or K-H or whoever – to make sure that if the patches had gotten far enough along into development, they would have been refused before they made it into release.

Your McDonald’s analogy is off-base. The researchers behaved unethically, and their experiment proved nothing substantial.

19 Likes

I’m still seeing only positives there. My only possible criticism is that he wasn’t successful enough, but we’d need a time machine and a T-1000 who could speak French to do better.

This whole subject is a superspreader event for engineer’s syndrome. A lot of people who are right about “unethical” and right about “human subjects” but never quite right about both things at the same time.

5 Likes

University duo thought it would be cool to sneak bad code into Linux as an experiment. Of course, it absolutely backfired

5 Likes

We know the results of Sokal’s experiment: a little buzz, a few talking points for (grr) social conservatives, and long-term damage for the humanities. If his goal was to increase editorial discretion, he failed, because in this age of predatory journals anything and everything can get published.

3 Likes

Where? You can hardly blame Sokal for other people circling the wagons and doubling down on the nonsense.

2 Likes

The flaw isn’t in the peer review system as such, but in our perception of what it means. The academic publishing system is predicated on good-faith agents, and is largely incapable of dealing with researchers who intentionally camouflage bogus results. Even results from well-intentioned researchers can have mistakes slip through; if the result is interesting enough to warrant subsequent research, that’s when mistakes are most effectively uncovered. The idea that once something has gone through peer review it is unimpeachably true is completely wrongheaded.

I’m editor-in-chief of a math journal, and one consequence of the Sokal affair is that it has become a thing to submit bogus papers to journals. It is pretty easy to sort those out in my field, but I imagine that outside of the sciences it is much trickier, not because the subjects are less serious but because they are harder.

9 Likes

The problem with getting consent to introduce bugs into Linux is that you’re getting consent from (ahem) the very people doing the checking.
Add in the fact that you’ll be writing a paper on the findings, and who here would be surprised if the bugs were then easily located, with plenty of people volunteering to contribute to the paper?
Think about mystery shoppers or restaurant reviewers. If you tell the restaurant “hey, that dude over there is a restaurant reviewer, don’t treat him any differently…”, what do we think the outcome would be?
Why is everyone shocked that consent wasn’t asked for before doing this kind of test? They got banned because they exposed a weakness in the chain, which is the kind of reaction you’d expect from a big company more concerned with investors than from an open-source project. THAT is more telling than anything else, and would be a good subject for a research paper, actually…