Not a problem: any COBOL programmer worth their salt knows how to siphon off a fraction of a penny from each transaction and route it to their private off-shore account.
IIRC I once told that joke to a friend who works in banking, and he said that rounding losses are no joke; more than one person has tried to hide theft that way.
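For anyone who hasn't seen the trick spelled out: a minimal sketch of why those fractions of a penny add up. The fee rate and the transaction amounts here are completely made up for illustration; the point is just that truncating to whole cents leaves a residue that becomes real money over enough postings.

```python
# Sketch of the "fraction of a penny" trick from the joke above: truncate
# each fee to whole cents and quietly accumulate the shaved-off remainders.
# Rate and amounts are invented purely for illustration.
from decimal import Decimal, ROUND_DOWN

RATE = Decimal("0.0137")   # hypothetical fee rate
CENT = Decimal("0.01")

def shave(amount: Decimal) -> tuple[Decimal, Decimal]:
    """Return (fee truncated to whole cents, sub-cent remainder)."""
    exact = amount * RATE
    booked = exact.quantize(CENT, rounding=ROUND_DOWN)
    return booked, exact - booked

siphon = Decimal("0")
for i in range(1_000_000):                  # a month of postings, say
    amount = Decimal(i % 997 + 1) / 100     # fake transaction amounts
    _, remainder = shave(amount)
    siphon += remainder                     # an honest ledger reconciles this;
                                            # the salami-slicer wires it offshore

print(f"shaved off: {siphon:.2f}")          # fractions of a cent become real money
```

Which is also why reconciliation systems flag rounding residue explicitly, as the kicked-out transaction mentioned further down the thread shows.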
Is this real? I remember people saying that nobody knew COBOL any longer and banks were running on unmaintained code back in the seventies and eighties. It has banks and the hint of large sums of money. All it needs is a Nigerian prince.
In the seventies, the whole job was probably not written in COBOL. The bulk of the code might be COBOL, but all the optimised bits were probably hand-written in the local machine code. This is certainly how we did image processing on the PDP-11. So, as other people have hinted, it is probably a lot more than just picking up a COBOL manual and having a go.
And yet the original system must have been written from scratch. It has probably been modified since, but we have the old system, so we know exactly what's wanted. We could probably write a new system from scratch. If this sounds hard: I have written software versions of hardware whose workings everyone had long forgotten. In the particular case I am remembering, the hardware version had one number that came out bit-reversed because the ribbon cable that took it to the next board should have had a half-twist in it, and nobody had ever known.
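That kind of quirk ends up encoded explicitly in the software version. A minimal sketch of what replicating it might look like; the 16-bit word width is an assumption on my part, the original hardware isn't specified.

```python
# Sketch of baking a hardware quirk into a software re-implementation:
# the missing half-twist in the ribbon cable effectively reversed the bit
# order of one value, so the emulation has to reverse it too.
# The 16-bit width is an assumption; the comment above doesn't say.
def reverse_bits(value: int, width: int = 16) -> int:
    """Return `value` with its `width` low-order bits in reverse order."""
    out = 0
    for _ in range(width):
        out = (out << 1) | (value & 1)
        value >>= 1
    return out

assert reverse_bits(0b0000_0000_0000_0001) == 0b1000_0000_0000_0000
assert reverse_bits(reverse_bits(0xBEEF)) == 0xBEEF   # a second twist undoes the first
```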
My challenge is: say exactly what the job is so we can write a modern version, or admit this is an urban myth.
Well yeah, probably what's needed is people whose skills go beyond banging out node.js code in their preferred IDE.
As an aside, did you hit the image size limit on the PDP-11? Our experience was with RSX-11M on the 11/84 and 11/83. We used it for traffic signals, and there was business logic (coded in 16-bit machine code) in the device drivers because there was nowhere else to put it.
Back in the day every job was hardware/software because everything had to fit into the hardware. Now it's basically fitting into the innermost layer of abstraction.
Oh gods, yes. All the time. The program and the data memory came in 4KB chunks. It was hard to go beyond 16KB (I dimly remember you could as the superuser). You could get about 5 lines of a 1K RGB image. We used to interpolate colour using 16x16x16 cubes (4K entries * 3 channels). All we needed (we said, while laughing) was enough memory, and we could just look up the RGB values in a 24-bit table, right?
About ten years later, I remembered this. 24 bits is 16 million addresses. For RGB as bytes, this would be 48 MB. It wasn’t even hard on a 32-bit system. Okay, it was still a silly solution because you still had to calculate these 16 million values and most images weren’t that big anyway. But it came as a shock that the absurd idea had become easily possible without us noticing.
Now images are that big, and we have to do 120 frames per second. But we still don’t do it that way. Happy days.
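To put numbers on it, here is the same arithmetic as above written out; this is only the size calculation, not a claim about how the actual pipeline worked.

```python
# Re-doing the arithmetic above: the 16x16x16 interpolation cube vs. the
# "absurd" full 24-bit lookup table. Sizes only.
cube_entries = 16 * 16 * 16            # 4,096 lattice points
cube_bytes = cube_entries * 3          # 3 output channels, 1 byte each -> 12 KiB

full_entries = 2 ** 24                 # every possible 24-bit RGB input: 16,777,216
full_bytes = full_entries * 3          # 3 output bytes per entry -> ~48 MiB

print(f"16x16x16 cube: {cube_bytes / 1024:.0f} KiB")
print(f"full 24-bit LUT: {full_bytes / 2**20:.0f} MiB")
# Trivial on a 32-bit machine, hopeless in 4KB memory chunks on a PDP-11.
```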
Sure, but then you still have to transition over to git, which includes training people and migrating the code and all the deployment pipelines. All that takes time and isn't free, and it's probably not at the top of any manager's list of fires to put out.
What do you mean by that? Maybe I'm getting you totally wrong, but the set of automated tests that prove the system is still working as intended has to be developed specifically for that system. And when writing those tests after the fact, you'll find that the architecture and the design of the system make that hard or impossible without changing the system itself. So that can't really be outsourced to other companies; it leads to adopting a new paradigm for software development.
But companies of course still try to outsource that. And fail.
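For what it's worth, here is roughly what "tests that prove the system is still working as intended" tend to look like in practice for legacy code: characterization (golden-master) tests that pin down today's behaviour before anyone touches anything. Everything in this sketch (the run_batch.sh entry point, the file paths) is a hypothetical stand-in, and getting the legacy system to run repeatably like this is usually the hard part alluded to above.

```python
# Minimal characterization ("golden master") test sketch: capture what the
# legacy system does today, then assert future runs still match.
import json
import subprocess
from pathlib import Path

GOLDEN = Path("tests/golden/eod_batch_2020-04-01.json")   # hypothetical captured output

def run_legacy_batch(input_file: str) -> dict:
    """Run the existing batch job and capture its output as data."""
    raw = subprocess.run(
        ["./run_batch.sh", input_file],        # hypothetical entry point
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(raw)

def test_end_of_day_batch_matches_golden_master():
    actual = run_legacy_batch("tests/fixtures/eod_input_2020-04-01.dat")
    expected = json.loads(GOLDEN.read_text())
    assert actual == expected                  # any diff means behaviour changed
```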
One important measure of architectural fucked-upness is how much time it takes to add a new feature or fix an existing problem. That's what I'm talking about.
Where would you see “hundreds of thousands” of concurrent users? What people do in their online banking interfaces isn't real-time transaction processing; that's mostly sent over to the mainframes and processed in batches. I wouldn't know how all that real-time trading works, though.
so they don’t see any reason to buy a new one
Or, in the case of high-risk industrial and power installations, can't:
- Spend money getting a new machine audited and approved.
- Risk a machine built with components that are less hardened against radiation or environmental concerns (TTL components and magnetic core memory vs CMOS and DRAM? Really? Get off my EMP-hardened lawn, kid…).
- Risk a machine that can’t be fixed without custom silicon.
- Be bothered to change a system running space-shuttle-complexity industrial machines on 1960s/70s tech. It works just fine, thank you (Varian’s update of the V72 “fixing” some of the memory access concerns seems to be causing more grief than it’s worth).
That first concern is a big one. I’m aware of an early 1970’s era installation that wrote an emulator on its Sperry-UNIVAC 11xx series control computer for the previous generation of control computer from the early 60’s just to avoid having to recommission the original software. That software was only a few hundred K of machine language.
rounding losses are no joke
Yup, the first time I saw it, a transaction got kicked out of a system: around $300k for the month’s rounding error in one department.
(a VMS-based system written in Ada, no less)
I was talking more about banks, as a big part of the current mainframe market. Industrial applications mostly don’t use mainframes these days, so if they were going to replace one, it probably wouldn’t be with a new mainframe anyway. Industrial machine control is done mostly with pretty-close-to-consumer hardware (for the chips, anyway), computation-heavy stuff is done with supercomputer clusters, not mainframes, and rad-hardened stuff is done with custom GaAs chips. Financial institutions’ use case is one of the only ones that still requires an actual mainframe.
That can go spectacularly wrong too.
I had no idea Ada was used in banking. It’s used everywhere in aerospace, of course.
I’m aware of an early 1970’s era installation that wrote an emulator on its Sperry-UNIVAC 11xx series control computer for the previous generation of control computer from the early 60’s just to avoid having to recommission the original software.
Yeah, the traffic signal system I worked on ran on PDP-11/84 and 11/83 systems. We replaced it initially with a PDP-11 emulator card plugged into commodity personal computers running MS-DOS.
It was just getting too hard to keep the original hardware working. This was in the late 1990s.
I had no idea Ada was used in banking. It’s used everywhere in aerospace, of course.
It’s rare. In that case a global, multi-site, half-million-line system could be maintained and extended by a small team of six developers, because half of us were ex-aerospace. Ada was a good choice, actually; it imposes a certain discipline.
COBOL? My grandpa’s still getting frantic requests for FORTRAN I punch cards & paper tape.
What the heck is that giant wheel? (And, more importantly, how can I justify installing one at my desk?)
Ugh. Sad programming-related news. Though not widely reported anywhere yet, it appears John Conway (inventor of the Game of Life) passed away from COVID.
Someone had a sense of humor Photoshopping a DECwriter into that picture.
But no short pants.
Thank you for posting that, @gatto. I had no idea. I love Conway’s Game of Life, and have coded it up many times. This is really sad. I suppose, in a way, being killed by an aggressive new virus might be slightly apropos.
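Since it has come up: part of why so many of us have coded it up so many times is that the whole thing fits in a few lines. A minimal sketch of one generation, using a sparse set of live cells (just one common representation, not the only way):

```python
# One generation of Conway's Game of Life over a set of live cells.
# Standard rules: a live cell survives with 2 or 3 live neighbours,
# a dead cell is born with exactly 3.
from collections import Counter

def step(live: set[tuple[int, int]]) -> set[tuple[int, int]]:
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):          # after 4 generations a glider has moved one cell diagonally
    glider = step(glider)
print(sorted(glider))
```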
And therein lies the problem with COBOL: human languages, especially English, are filled with ambiguities, oxymorons, contradictions, verbosities, and illogical rules. That’s why computer languages are designed to be the opposite.