Someone kick that motherfucker down an access tube.
Looks like they flogged their zero-day exploits to one more repressive government.
I agree selling exploits makes you scum – as does contracting with someone who sells them – but that’s one big step short of “war criminal”, which I think has something to do with conduct in a war.
We will need potent systems of disclosure to recover from the tremendous damage done to the US and the Internet by the NSA. Many of the actions of the NSA have rendered our existing systems of disclosure impotent. Both whistle-blowing and its near kin, vulnerability disclosure, have been severely damaged.
Once, a security researcher could publish information on a vulnerability with relative impunity. Now, every publication comes with a substantial risk of jail. The NSA has worked hard to create and sustain vulnerability on the Internet. They consider exploitation to be their fundamental right and duty. I have no proof that the NSA’s tampering has extended to pushing for the prosecution of independent security researchers. But, the current attitudes toward vulnerability publication are dang convenient for the NSA. Just as the current attitudes toward whistle-blowing have supported their unbridled excess.
So, it is dang strange to see the NSA supporting the VUPEN group. They should be natural enemies. I would expect the NSA to do all in their power to destroy VUPEN. The revelation that the NSA provides material support for VUPEN indicates that the NSA approves of the widespread creation and hoarding of exploits. Could it be that the NSA’s real objective is to destroy the Internet? Could it be that they don’t care who does it, as long as the Internet is destroyed?
Hello,
Regardless of what you think about VUPEN and companies like them, I am unsure of why anyone is surprised VUPEN sold a subscription to a US federal government agency. It is not hard to believe they would have contracts with similar agencies in other NATO countries, and possibly some of the ASEAN and OAS nations as well. After all, I don’t think their typical client is the Boy Scouts.
Regards,
Aryeh Goretsky
If you call yourself the Darth Vader of anything, you are the Jar Jar Binks of that thing…
There’s loads of wars. Pick one.
I don’t think the surprise is who VUPEN is selling to so much as the fact that the products of VUPEN’s efforts are being used against the American people by their own government.
In essence, what we’ve discovered is that the safeguards VUPEN boasts about are worthless. Their attempt to find moral high ground by restricting their customer pool to NATO was all for naught. There is no moral high ground to be found in selling for offensive uses.
Hello,
I read through all nineteen pages of the PDF file, and while I did not understand all of the minutiae, I did not see any mention in the contract stating that the software (vulnerability reports, POC exploit code, whatever) was specifically being purchased for use against US citizens by the government.
I did, by the way, find this little gem:
352.227-9000 SOFTWARE-REQUIREMENT (AUG 1996)
The Contractor warrants that, to the best of its knowledge and belief, software provided under this contract does not contain any malicious code, program, or other internal component (e.g., computer virus) which could damage, destroy, or alter software, firmware, or hardware or which could reveal any data or other information accessed through or processed by the software. Further, the Contractor shall immediately inform the Contracting Officer upon reasonable suspicion that any software provided hereunder may cause the harm described above. (End of Clause)
which I personally find hilarious, given the nature of what is being purchased (and which, unless it was discussed with the Contracting Officer, probably means that VUPEN is in violation of the contract).
If there is another document or article which states the “VUPEN Binary Analysis and Exploits Service 12 months subscription” was for use against US citizens, please let me know.
Regards,
Aryeh Goretsky
For what it’s worth, lots of us in the security community were never really all that enthusiastic about the cult of full disclosure to begin with. As they say, there’s no honor amongst thieves. Not that all who found or disclosed software vulnerabilities were bad actors, but there certainly have been plenty all along.
Associating vulnerability disclosure with whistle-blowing is nonsense. Software vulnerabilities are not automatic signs of wrongdoing or illegal/unethical actions on the part of those who create and publish software.
[quote=“Israel_B, post:13, topic:10144”]
For what its worth, lots of us in the security community were never really all that enthusiastic about the cult of full disclosure to begin with. As they say, there’s no honor amongst thieves.[/quote]
You and I must swim at different ends of the security pool. Perhaps, your side contains a lot of software vendors?
I do security for a research university. Almost all my security colleagues are in favor of aggressive disclosure. In fact, all of us agree that once an attacker starts exploiting a vulnerability, it should be disclosed. We have also experienced difficulties in getting vendors to fix vulnerabilities.
But, perhaps our local experiences are exceptional. Let me share a few of my experiences and maybe you can share some of yours so I can see why you feel that limited disclosure is productive.
-
My first experience with the disclosure debate was almost 20 years ago. This was before the modern practice of aggressive disclosure. I found that the university’s DNS servers were controlled by external hackers. The investigation revealed the exploit pathway involved a vulnerability that had been known to the vendor for over a year. But the vendor felt no pressure to fix it. Once we had resolved the compromise, we widely published the details. We felt this was the only way to ensure that nobody else would be hurt by this vendor.
-
Recently, we purchased a multi-million-dollar communications product. As part of the purchase, we did a security evaluation. We found a number of little issues. We also found a serious issue that would give an external attacker complete control of the system. The vendor has sold this product to thousands of organizations across the world. The scope of possible loss is in the billions. We produced a working exploit. Writing the exploit required about five years of web programming experience. We demonstrated the exploit to the vendor, and the vendor automatically assumed that we would eventually publish. The vendor addressed the issue within a few months.
-
A university is a quasi-government. We purchase many software packages that are designed to facilitate government workflow. We recently analyzed one of these products and found a serious issue that allowed an external attacker full control and access to all workflow. Not only could an attacker monitor and redirect workflow, they also had access to all the sensitive info in the system. We didn’t need to develop an exploit. The level of difficulty was first-year high school. This product is widely deployed all over the country.

We demonstrated the issue to management and to the vendor. The vendor insisted that there was no problem. When we pressed the issue, the vendor threatened to sue. We demonstrated the issue to our local branch of government. Our local government guys felt they needed a resolution, or they would leave the vendor. The vendor responded by threatening to use their influence with government to have us arrested. We pointed out that every place they had influence, they had also created a victim. Eventually, the vendor agreed to fix the problem. Someday. We armored up our local installation to protect against attack. But, all we can do for the rest of the country is pray. Or do full disclosure. Which would land us in jail.
We are faced with widespread, crippling vulnerabilities in the Internet and its software. Some of these issues are the result of deliberate collusion between government and vendors. We have a large branch of government that is acting to create and preserve vulnerability. Based on my years of experience, I believe that aggressive, forced disclosure is the only method that will work. Cooperative disclosure will simply be diverted or assimilated.
Now, how do things look at your side of the security pool?
Guess we do. For the last 20-odd years, almost all my work has ranged from an early NYC ISP to the corporate side in Tokyo.
Perhaps, your side contains a lot of software vendors?
Nah. I put in a few years at Trusted Information Systems (long gone now) and then NAI, which gobbled them up, but nothing since then.
In my experience, which doesn’t include academia, I’ve been the firefighter, the one cajoling management about the need to pester vendors, or the one who had to do the pestering while applying makeshift controls elsewhere in the network.
Maybe you guys in academia enjoy a different pace, or perhaps no one’s job is on the line when things go horribly wrong. It’s nice that you have been responsible in dealing with vendors, but my experience is that for every one person who is in any way responsible, there are 20-100 who aren’t.
Over the years I’ve become less impressed with the skills of breaking things and more impressed with the skills of dealing with what got broken. More and more folks try to make some name for themselves as a “researcher” by breaking things, but I’m just not of the opinion that they are, in the end, contributing to the common good if that’s all they have to offer.
Very good point, Israel. I think we agree on the fundamentals. I think we need a professional code of conduct, like physicians or engineers. I certainly agree that being a security professional is no excuse for unethical behavior.
Sometimes I despair of the security industry. It seems to strongly reinforce attack and mindless response. I think we frequently become so obsessed with the dance of attack, that we fail to plan for a better future.
I used to think it was testosterone-enhanced blindness. But now I just blame the NSA.
For example, this spring, when we analyzed the SANS 20 Critical Controls (originally written by the NSA), we found some inexplicable oversights. See Appendix 4 at the end of this Google Doc. Of course, they had to prioritize. But some of the oversights appear to be pretty important. As currently composed, the Controls are hard to prioritize for local situations. And they do little to create a better future. We wrote 3 additional Strategic Controls that we felt had greater value than some of the existing Controls. Our institution would value them at about #1, #3, and #6. They are:
Critical Control 1: Unity of Vision
Security is a MEANINGFUL Assurance that YOUR goals are being Accomplished. Most security failures are enabled and enhanced by disagreement of purpose. Are the fundamentals of management in place?
A. How does your organization create a sense of community?
B. What are your Institution’s Goals?
C. How are those goals propagated throughout the organization?
D. How do your security actions promote your institutional goals?
E. How do your security actions provide assurance to your institution?
F. How does your institution reward long term loyalty?
Critical Control 3: Enable a Better Future
This control assumes that our actions affect the future. Do your actions enable a more secure future?
A. How do you increase the cost of attack?
B. Do you report attack to the remote ISP/attacker?
C. How do you coordinate with law enforcement?
D. How do you decrease the cost of defense for yourself and others?
E. How do you reduce the motivation for local attack?
F. Do you disclose vulnerabilities to others?
If so, will your institution protect its people when others attempt to punish disclosure?
G. Do you facilitate others disclosing vulnerabilities to you?
H. Do you help your peers improve their security?
Critical Control 6: Informed Response
This control assumes that Security is an ever changing landscape. If you don’t correctly respond to the current challenges, you will not survive. How responsive is your security?
A. Do you detect and measure attack trends?
If so, do you communicate the results to management?
If so, do you communicate the results to your peers?
B. Do you track the major venues for attack disclosure? (DefCon, ShmooCon, BlackHat, etc.)
C. How many days does it take to communicate a security concern to the highest levels of management?
D. How many days does it take to implement an approved security initiative?
It’s been tried before, especially in the disclosure arena. Unfortunately, since sites like this one will dignify any teen with a fuzzer as a “researcher,” the chances don’t look good.
I used to think it was testosterone-enhanced blindness.
Maybe the solution is to start a program to get these kids laid.
We wrote 3 additional Strategic Controls
Nice stuff if you can convince management to stick to them when the fecal matter hits the fan. Honestly, it’s hard enough when even basic stuff can cause the business side to scream about inconvenience.
[quote=“Israel_B, post:17, topic:10144”]
Nice stuff if you can convince management to stick to them when the fecal matter hits the fan.[/quote]
Exactly. Sticking to your goals when attacked is the heart of defense. Ultimately, it is the only thing that matters in security.
Thanks for the discussion, Israel_B. I hope you’ll forgive me one last dance before the thread shuts down.
If you don’t stick to your goals when attacked, then you have lost. The attacker may not have won, but you have lost.
Your organization adds value by sticking to its goals. But this is more than just a matter of value added. Goals are the spirit of the organization.
We believe that Security is a meaningful assurance that your goals are being accomplished. The details are transitory. But, without goals, security has no point.