Human reviews of code are about the least efficient way to ensure code quality. This is why good static analysis tools are needed – and I’m surprised an issue like this, which seems like it would be obvious to a tool, wasn’t caught by automation.
“Checked bugs into” is an odd construction.
Researchers at the University of Minnesota checked deliberately bugulent code to the Linux kernel to demonstrate how a malicious actor might slip past the open-source review process.
I don’t even understand the usage of “checked” here. Checked code to?
To “check in” code is a standard software engineering phrase which means submitting code to a source code repository (or to review via a mechanism like pull requests on GitHub). The latter of your examples is arguably missing the word “in”, but otherwise it’s a cromulent usage for software engineering.
There COULD be real research there. Some folks like LaTeX. I know I prefer to have everything formatted “just so” for my solo time. Way better than Word.
The REAL giveaway is: who is going to publish a paper with TWO Kevlins?
Also, they really should have gone for Kevlin, Manbodo… Nothing like a man bod in cold latex to make everything OK.
Pretty much any software project out there uses some sort of version control system (VCS) to keep a catalog of changes to the code over time. In VCS nomenclature, when you “check out” code, you’re taking a working copy to make changes to, and when you “check in” you’re publishing those changes back to the VCS. (Now imagine this at a scale of millions of lines of code, with tens of thousands of developers of varying skill from anywhere in the world, and you have a big open source project like the Linux kernel.)
Typically, before you check in your code, you have to go through a check-in process. This often involves things like automated builds to make sure your change didn’t break the build, automated quality checks using static analysis tools, and one or more humans signing off that your change is good. These processes are not flawless – humans make mistakes, automated tools can miss things, and timing conditions can cause things to break when everything comes together. (Worth noting that the git VCS uses slightly different verbs – “commit” instead of “check in” – but any software engineer will know what it means regardless.)
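For the curious, the check-out/check-in dance above looks roughly like this with git. This is just a sketch on a throwaway local repository – the file name, branch name, and commit message are invented for illustration, and a real kernel contribution would go through mailing-list review rather than a simple push:

```shell
# Create a disposable repository to demonstrate the cycle.
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
git config user.email "dev@example.com"   # placeholder identity
git config user.name "Example Dev"

# "Check out": start a branch to work on a change in isolation.
git checkout -q -b fix-null-deref

# Make a change (a made-up one-line "fix").
echo 'return ptr ? ptr->len : 0;' > patch.c

# "Check in": stage and record the change in the VCS.
git add patch.c
git commit -q -m "fix: guard against NULL pointer dereference"

# In a real project you would now push this branch and open a
# review (pull request / patch email) before it reaches mainline.
git log --oneline
```

The review step is exactly where the automated builds, static analysis, and human approvers sit – nothing lands on the main branch until the change survives all of them.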
In this case some researchers used social engineering against the check-in process to knowingly publish bugs into the Linux kernel. These bugs were missed by the automated tools and also slipped past the human approvers. Thus proving, I dunno, that human reviewers make mistakes?
It’s grotesquely juvenile. Obviously submitted by pranksters.
I found it by following a “what papers cite this paper” type of link, found TSEJ, and realized that most of the papers published by TSEJ bordered on crap. And then I found the LaTeX paper. I shudder to think of what’s in the other Juniper-published journals.
His paper (supposedly inadvertently) supplied ammunition to right-wing fringe groups who go on and on about leftist indoctrination in the academy.
https://physicstoday.scitation.org/do/10.1063/pt.5.8203/full/
That’s kind of a bad analogy; it’s more like checking a restaurant’s food safety standards by putting bad meat in their grocery delivery. They got banned because they were sending bad patches from a number of email addresses and refused to stop when asked.
They keep their identity secret while consuming a product. This is a completely different situation than if, say, you decide to introduce health violations into a restaurant kitchen to see how the rest of the staff reacts.
Absolute nonsense. They were banned because they did an irresponsible thing that cost people time and energy, and risked putting actual flaws into heavily used software.
These kinds of things can be done responsibly. Off the top of my head: notify those at the top, and provide detailed information on exactly what would be submitted, when, and by whom. Even better if the patches also contained useful code, actually written by people with extensive experience submitting to the kernel, to avoid rejections for unrelated issues and to avoid completely wasting reviewers’ time. Add monitoring by those at the top to ensure the identified patches never get anywhere near the mainline kernel. Then resolve things rapidly and inform all affected volunteers as soon as possible, to minimize everyone’s wasted time.
There… sheesh, this stuff isn’t hard. Either the people doing this are totally incompetent, or actually malicious.
Sokal didn’t give ammunition to the fringe groups. To extend the metaphor further, the ammunition was already there, and his writing just revealed that it was there and urged people to clear it up.
If it hadn’t been for Sokal, the culture warriors on the right would (and did) just make things up to push their arguments instead. Pointing out bad reasoning and the acceptance of ideas that were “not even wrong” shouldn’t have been about “sides” in a culture war anyway; it should be about rigorous scholarship and promoting ideas that describe reality accurately.
Sadly, even though they were widely spread, Sokal’s warnings appear to have failed. The nonsense is now ascendant and goes further than he could have imagined, seeping even into popular culture.
I don’t think they have failed. Sokal managed to wake up some of the postmodernists in the 1990s “Science Wars”, like Bruno Latour, who has gone on to say he realizes that postmodernism has done great harm to the public understanding and trust of science, and that actors on the Right have learned from the postmodernists and use essentially pomo arguments to promote creationism and climate change denial.
Is this even unusual?
Like, it’s unusual for it to be part of a research project that’s trying to get published in a journal, but don’t trolls submit troll patches all the time, all day every day?
I mean look at how much energy Wikipedia has to spend on reverting trivial vandalism
Were they? From everything I read, none of the buggy patches were accepted.
One was, but doesn’t really count because it wasn’t actually buggy.
The problem is that 190 other patches submitted by their group are being reverted/rejected because people are suspicious something got through.
One of the other patches widened a data race. There was a claim that 3 of 4 patches reviewed from one of the group members contained security holes, and this appears to be what triggered the mass revert. I’m not sure the claim is correct. But there were a couple of recent patches from the group that were, in my opinion, nonsensical, and one of them was accepted when it shouldn’t have been.
If you dress up as a bat and punch criminals because the justice system has failed and organized crime has taken over, and if the people applaud you for doing so, then I guess you might think of yourself as a hero.
But if you dress up as a bat and punch criminals when the justice system is working pretty well, crime is low, and people are more scared of you than they are of the people you are punching, then you really ought to recognize you’re just an asshole.
I guess ultimately history will be the judge.
I have no idea, and am curious about the real answer.
My guess is that it takes enough work to write code that appears to fix an issue while introducing planned bugs that the answer is generally no.
There are definitely hackers out there who will target popular software to introduce backdoors and the like. I guess I wouldn’t be surprised if the NSA and Russian state actors have tried with Linux. The fact that it’s open source and does have so many eyes maybe makes it less likely to be successful than something closed like SolarWinds.
Getting patches into the Linux kernel requires a significant amount of work and reputation-building. It also has a workflow that is off-putting to many. The reason these people got this far is that they traded on their university’s name and previous work with the Linux kernel. The result is that they are now basically banned from ever submitting kernel patches again, and numerous companies I know have removed UMN resumes and applications from their files. This is going to blacklist a number of compsci students who just picked the wrong university.
This topic was automatically closed after 5 days. New replies are no longer allowed.