In practice, ‘API’ tends to get more limited use just because overbroad use obfuscates the issue (you wouldn’t say ‘the USB API’ when describing the protocol(s) used by USB); but if being an ‘API’ amounts to a patent that lives as long as a copyright, there are a lot of things close enough to being ‘APIs’ that they could be shoehorned into that category for legal purposes. SQL could certainly be described as ‘a database API’ without doing any real violence to the truth. It’s not a terribly helpful definition, so the techies probably wouldn’t use it; but it’s not a false one.
And down that path lies some genuinely ugly outcomes.
I literally just started learning C. Not a professional programmer, I cut my teeth on QBasic when I was a teen and learned a little Python later. PHP was my most recent experience with something approaching a programming language. What makes you paranoid, asks the newb?
I can’t speak for Nemomen, but C is notorious for giving you the enough-rope-to-hang-yourself freedom to make the sorts of subtle and dangerous memory handling errors that end up being exploitable.
It’s a very, very powerful tool, and way less painful than anything lower level, but subtle and gruesome mistakes are very much an option.
Additionally, C is famous for having many traps for newbie programmers. It isn’t only exploitable memory handling errors that are the problem, but the language syntax itself and the things it allows you to do. Pointer handling in C is derived directly from register indirect addressing in assembler. C was originally developed on a machine with 18-bit addresses and address registers, and much early production work was done on 16-bit machines. With 64-bit machines, the potential problems are worse. Languages like Java were designed to prevent programmers adopting the worst practices of C. C++ created a whole new exciting world of ways for programmers to screw up via its inheritance mechanism; again, Java was designed to clean this up, as were just about all the other managed code languages that followed.
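To put a couple of those traps in front of you: here is a small illustrative sketch (not from anyone's real code) of two classics that the compiler will happily accept. Both lines are legal C, and both are wrong.

```c
#include <stdio.h>

int main(void)
{
    int balance = 100;

    /* Trap 1: assignment instead of comparison. This compiles (often with
       at most a warning), always takes the branch because 42 is nonzero,
       and silently clobbers balance. */
    if (balance = 42)
        printf("balance is now %d\n", balance);

    /* Trap 2: nothing stops you walking a pointer off the end of an array;
       the read below is undefined behaviour, not a compile error. */
    int nums[4] = {1, 2, 3, 4};
    int *p = nums;
    printf("off the end: %d\n", *(p + 4));

    return 0;
}
```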
PHP, of course, along with Javascript, offers wonderful opportunities to do things really, really badly, and it remains true that a bad programmer can write bad code in any language. But in C and C++ it is all too easy for a tired programmer to write bad code and miss it. And far too many programmers seem to consider it a badge of honour to write code when overtired.
My own route through programming was, roughly, 16 bit assembler → CORAL 66 → C → J2EE. I always felt most exposed when writing C, so I tend to agree with @nemomen. Though I prefer Netbeans to Eclipse.
Being called a Three Star Programmer is an insult, not a compliment.
C allows you to do incredible, and incredibly dumb, things. Three stars refers to the syntax of pointers: three stars means a pointer to a pointer to a pointer. The clever engineer will pat themselves on the back for making their code so ‘compact’, and everyone else who has to maintain it will curse their name, since they don’t know what the hell that code block does.
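For the curious, here is a hypothetical sketch (names and function invented for illustration) of one of the few semi-legitimate places a three-star declaration shows up: growing an array of strings through an out-parameter. It is legal C, but the next maintainer has to mentally unwind three levels of indirection to follow it.

```c
#include <stdlib.h>

/* Grow a NULL-terminated table of strings by one slot.
   *table is an array of char* (strings); we reallocate it through a
   pointer to that pointer, hence char*** in the signature. */
int grow_string_table(char ***table, size_t *count)
{
    char **tmp = realloc(*table, (*count + 2) * sizeof *tmp);
    if (tmp == NULL)
        return -1;              /* original *table is still valid */
    tmp[*count] = NULL;         /* new, not-yet-filled slot */
    tmp[*count + 1] = NULL;     /* keep the table NULL-terminated */
    *table = tmp;
    *count += 1;
    return 0;
}
```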
“Computers are like Old Testament gods; lots of rules and no mercy.”
I never underestimate my own ability to screw things up, but at least I’m nowhere near doing anything that can’t be fixed by sighing heavily and starting again.
Huh, I’d have thought assembler would be more dangerous, but I guess it’s just not as easy to screw things up.
In assembler you are expected to trace code paths, which is easier because you have fewer levels of translation. So when you combine easy-to-abuse low-level features with a high-level language… well, it gets ugly.
Especially since Oracle has listed him as someone they pay for consulting during this litigation. And IIRC he was consistently wrong in the SCO vs IBM litigation.
On the other hand, we’ve already got one boneheaded ruling on appeal. Here’s hoping they don’t get a second one.
If I’m writing C and I’m not paranoid, I’m doing it wrong. Thanks to malloc(), free(), the way strings are handled, and a few other features of the language, C makes it all too easy to write whole classes of bugs you don’t have to worry about in most languages. Those bugs won’t just cause some kind of crash; a subset of them can be exploited by the bad guys as security vulns. There are also whole swaths of bugs where the code compiles but is technically undefined behavior that may not do what you expect. In tiny programs this is pretty easily managed (provided you’re paranoid), but in large things it can get really hairy.
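A minimal sketch of the two bug classes I mean (buffer sizes and strings are made up for illustration): a buffer overflow from C’s string handling, and a use-after-free from manual malloc()/free() management. Both compile without complaint.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char name[8];
    const char *input = "definitely longer than eight bytes";

    /* strcpy() does no bounds checking: this writes past the end of
       name[], corrupting the stack -- the classic exploitable overflow. */
    strcpy(name, input);

    char *buf = malloc(32);
    if (buf == NULL)
        return 1;
    strcpy(buf, "hello");
    free(buf);

    /* Use-after-free: buf still holds the old address, and nothing
       stops us reading through it. Undefined behaviour. */
    printf("%s\n", buf);

    return 0;
}
```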
The things I work on in C for the job are mostly a mix of a userland program, and a kernel module. If I make a bad mistake in the kernel module, not only will there be a crash, but that crash will trigger a kernel panic bringing down the entire system. Customers do not like kernel panics.
Written well, you’ll get solid, predictable performance and a small memory footprint (unless you have memory leaks), so it’s great for some domains. The things I’m doing involve near real time packet analysis on multiple VLANs with lots of endpoints on appliances without a lot of RAM, so we need the speed and low memory footprint.
It’s a written document, which is perfectly sensible to copyright. That’s different from the behaviour being described in the document being copyrighted.
Consider: the printed board for a board game, and the instruction booklet it comes with, are both copyright-protected documents. But the actual rules of the game described and enabled by those documents are not copyrightable, so anyone can make a game with gameplay in every way identical to Scrabble™, as long as they don’t use Hasbro’s colour scheme and graphic style on their board, and write the rules in their own words rather than copying Hasbro’s text.
Copyrighting documents / documentation is reasonable. The extension of suggesting that because a document with a list of APIs is copyrighted, no-one else is permitted to make a product that shares that list of features (developed independently) is a major problem for the industry. It feels like a horrible runaround to turn a copyright into a patent.
“Who are Michael Feldman and Robert Sewell and why should I listen to them?”
Enkita
The first remark I kind of agree with - Java has some of the most high risk features from C++ removed. The second one is simply childish abuse.
If Oracle does cause a move away from Java there will be unexpected consequences, including a lot of new bugs as code is ported to other languages, and as programmers retrain in languages with which they are less familiar. The logical thing might be for the US government to use eminent domain to take over Oracle on national security grounds, to prevent serious economic damage. But the whole argument is like the academics who push for the use of PostgreSQL. Porting existing databases to a new platform or trying to run two platforms in parallel is fraught with commercial risk. As the old joke says, why was God able to make the world in six days while banks take years to port an application from one mainframe to a newer one? Because God didn’t have an installed base to support.
My lawnmower has a CPU. You stick your hand in there and it will shut off and beep loudly. It bumps into you, it stops and goes a different way. It follows the rules laid out by its guidance wire and doesn’t go outside them. I don’t anthropomorphise it, I think of it like a sheep or a rabbit.
The Oracle business model is like a 20th century lawnmower. Even lawnmowers have got more sense than Oracle management nowadays. Because most people want their IT to work like my lawnmower; sensible, reliable, doesn’t bite your hand off.