Lots of older source code - say, for the Apple ][ - is worth reading just for the comments.
I recall a comment from Steve Wozniak: “Something magical happens” in a serial port routine.
Old code was like that. It’s only now, when programming is being deskilled and Taylorised, that people get assessed by managers on things like comments; in those days it was engineers and scientists who understood one another.
I wish I had kept my copies of the original Unix manuals - it’s hard to believe they would get through review nowadays. Man-page gems like “BUG: may howl at moon” helped keep morale up.
It’s hard to compare today’s languages with old-school programming, since back then the hardware really dictated the software design. Code had to be elegant, well thought out and tight as a MoFo! It’s easy for coders today to be loose and sloppy since we don’t have to worry about resource constraints anymore.
I await the verdict of conspiracy theorists - cue the “There’s no way this code could get people to the Moon [something I say with no knowledge of programming]!” comments.
Sells bigger servers, anyway.
I have heard the excuse for crap code “Well, it’s not as if there are any resource constraints…” Still crap code, though.
Compare with the car industry, where a 2000 Boxster did 18 mpg and managed 217 bhp, while the current model achieves around 28 mpg and 300 bhp. What was acceptable in the days of cheap oil is not acceptable now. In the same way I suspect that as the electrical consumption of server racks increases - there’s a story that a new server farm in Ireland may need as much electricity as all of Dublin - the time is going to come when code optimisation gets important again. After all, Apple has built a pretty big business out of a mobile phone system with better-optimised code than the competition.
I’ve just written a mildly complex program in PIC assembler using just over 400 14-bit words. It isn’t rocket science but it calculates the correct time to water the greenhouse each day based on light intensity, temperature and humidity. And it is far from optimal.
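For anyone curious what such a controller boils down to, here’s a minimal sketch in Python rather than PIC assembler. The original post doesn’t give its actual formula, so every weight and threshold below is invented purely for illustration:

```python
# Hypothetical sketch: the real PIC program's formula isn't given in the post,
# so the base duration, weights and clamps here are made up for illustration.

def watering_seconds(light_pct, temp_c, humidity_pct):
    """Estimate daily watering time from three sensor readings."""
    base = 60  # seconds of watering on a mild day (assumed)
    # Brighter and hotter days get more water; humid days get less.
    duration = base * (0.5 + light_pct / 100) * (1 + max(temp_c - 20, 0) / 10)
    duration *= (1 - humidity_pct / 200)
    return int(max(0, min(duration, 600)))  # clamp to 0..10 minutes

print(watering_seconds(light_pct=80, temp_c=28, humidity_pct=40))
```

The point isn’t the numbers but the size: logic like this compiles down to very little, which is how it fits in a few hundred 14-bit words.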
If you don’t need images or blocks of text, programs can be very short indeed.
Ah, but that’s logic - the actual objections will be utterly irrational and ignorant, from “I saw some programming in a movie once/I make web pages, that’s not what it looks like” to smug dismissals because the code makes no references to the Moon being made out of cheese (having been constructed there by alien fondue enthusiasts).
Isn’t that always green screen with a Unix core dump scrolling down it?
I must say that it is rather well commented. Even so, the assembly language is so different from any of the half-dozen computers I’ve programmed in assembly, that I can make neither heads nor tails of the actual instructions.
I may have to go find the AGC programming manual to figure it out.
I actually think, technically speaking, there is no way that code could get people to the moon.
That is why they needed a Saturn V…
It looks like the design was very influential on a number of subsequent processors. The PIC smells strongly of it, but based on the dates it’s hard to know whether it influenced the PDP-8 or vice versa. The PIC is a Harvard architecture, but the concept of timers and so on occupying the same register array as RAM is there, as are the horrendous bank-switching mechanisms. Like the PDP-8 it started with 8 basic instructions and went on from there.
The 15-bit data, confused handling of overflow, ones’-complement numbers and spaghetti instructions look like the work of practical engineers rather than computer scientists. But it also looks like it would be amusing to play with.
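For readers who haven’t met ones’ complement: negation is a plain bit-inversion, and addition needs an “end-around carry”. A minimal sketch using 15-bit words like the AGC’s:

```python
MASK = 0x7FFF  # 15-bit word, as on the AGC

def ones_neg(x):
    """Ones'-complement negation: invert all 15 bits."""
    return x ^ MASK

def ones_add(a, b):
    """Ones'-complement addition: wrap the carry out of bit 15
    back into the low bit (the end-around carry)."""
    s = a + b
    return (s & MASK) + (s >> 15)

assert ones_add(5, ones_neg(3)) == 2   # 5 + (-3)
assert ones_neg(0) == 0x7FFF           # "minus zero" - two zeros exist
```

The two representations of zero are one source of the “confused handling of overflow” the hardware had to live with.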
My admiration for people who could get a virtual machine running on it and write programs in tens of kilobytes is enormous. Once LSI came along, bringing the standard model of a CPU with I/O instructions and separate I/O devices, things were much easier to grasp.
Personally, my favourite old-time processor was the TMS9989, which was extremely regular and well behaved; it was just that its stackless architecture caused some programmers to throw several kinds of wobbly.
Mike Collins, why so serious. Somebody take Mike down to the corner for an ice cream cone.
When programmers at the MIT Instrumentation Laboratory set out to develop the flight software for the Apollo 11 space program in the mid-1960s, the necessary technology did not exist.
I’m confused by how fixated this is on Apollo 11. There were nine Apollo flights to the moon, plus several in Earth orbit, that would have had this guidance computer aboard. Did they change the code for each mission and this is specifically for 11, or is the author condensing the entire Apollo program into one flight?
Wouldn’t surprise me at all if each of the Apollos had slightly different code tweaks as they learned what was what. In fact, it’d be odd if they weren’t different.
Same basic design but the system evolved along with the missions so no two were exactly alike.
Development and production of the Apollo guidance, navigation, and control system reflected the overall speed of the Apollo program. Design of the system began in the second quarter of 1961, and NASA installed a Block I version in a spacecraft on September 22, 1965. Release of the original software (named CORONA) was in January 1966, with the first flight on August 25, 1966. Less than 3 years after that, designers achieved the final program objective. Even though fewer than two dozen spacecraft flew, NASA authorized the building of 75 computers and 138 DSKYs. Fifty-seven of the computers and 102 of the crew interfaces were of the Block II design. This represents a considerable production for a special-purpose computer of the type used in Apollo. The need to quickly build high-quality, high-reliability computers taxed the abilities of Raytheon.

In the Apollo program, as well as other space programs with multiple missions, system software and some subordinate computer programs are only written once, with some modifications to help integrate new software. However, each mission generates new operational requirements for software, necessitating a design that allows for change.
[quote=“Enkita, post:14, topic:81204, full:true”]
My admiration for people who could get a virtual machine running on it and write programs in tens of kilobytes is enormous. Once LSI came along, bringing the standard model of a CPU with I/O instructions and separate I/O devices, things were much easier to grasp.[/quote]
The use of an interpreter was a very old idea. I have a book describing the IBM (not SOAP) floating-point interpreter that was written for the IBM 650 computer in ~1955.
Wikipedia just describes SOAP as an assembler. The article I read suggested that on Apollo a bytecode-interpreter-like VM was needed to fit the programs into available memory, which is rather different. It’s a lot more than just an interpreter. BASIC dates to 1964 and I think it was the first widespread use of a VM.
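To make the space-saving idea concrete: one dispatch loop plus dense bytecodes can occupy less memory than the equivalent native code. A toy stack-machine sketch (this is not the AGC’s actual Interpretive language, just the general shape):

```python
# Toy stack-machine interpreter illustrating why a VM saves memory:
# each operation is one small code, and the decode loop is written once.
PUSH, ADD, MUL, PRINT = range(4)

def run(program):
    stack, pc = [], 0
    while pc < len(program):
        op = program[pc]; pc += 1
        if op == PUSH:
            stack.append(program[pc]); pc += 1
        elif op == ADD:
            b, a = stack.pop(), stack.pop(); stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop(); stack.append(a * b)
        elif op == PRINT:
            print(stack[-1])
    return stack

# (2 + 3) * 4
run([PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, PRINT])
```

On the real AGC the pay-off was larger still, because the interpreted instructions could express multi-precision and vector operations that would otherwise take many native instructions each.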