I am quite familiar with the two’s complement method of subtracting binary numbers. Applying it to decimal arithmetic is a curious thing to do.
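For anyone who hasn't seen it, the decimal analogue is the ten's complement: to compute a − b in a fixed number of digits, take the nines' complement of b (replace each digit d with 9 − d), add one, add that to a, and throw away the carry out of the top digit. A minimal sketch in Python (function names are mine, not from any particular machine):

```python
def tens_complement(x: int, digits: int) -> int:
    """Ten's complement of x in a fixed decimal width:
    the value that, added to x, wraps around to zero mod 10**digits."""
    return 10**digits - x

def subtract_by_adding(a: int, b: int, digits: int = 4) -> int:
    """Compute a - b by adding the ten's complement of b
    and discarding the carry out of the top digit."""
    total = a + tens_complement(b, digits)
    return total % 10**digits  # dropping the carry == reducing mod 10**digits

# 1234 - 567: complement of 0567 is 9433; 1234 + 9433 = 10667 -> drop carry -> 0667
print(subtract_by_adding(1234, 567))  # 667
```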
Looking at the history of computers, it took a while for two’s complement arithmetic to catch on. There is a good bit of history of sign-magnitude representation (a dedicated sign bit) being used, mostly in decimal machines but also in binary ones. Heck, I once worked with a self-taught programmer (really an optical sciences grad student) who didn’t know about two’s complement math and used sign bits in his 8080 code.
You can’t subtract by adding. You know how I know? 2 + 2 = 1, that’s how.
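(The wraparound in that joke is actually the whole mechanism: mod 3, adding 2 is the same as subtracting 1, since 2 + 2 = 4 ≡ 1, and complement subtraction relies on the same wraparound at the word size. A quick illustration in Python, my own sketch:)

```python
MOD = 3  # tiny modulus, matching the 2 + 2 = 1 joke

for a in range(MOD):
    # Adding 2 mod 3 gives the same result as subtracting 1 mod 3,
    # because 2 is the complement of 1 here (2 + 1 == MOD).
    assert (a + 2) % MOD == (a - 1) % MOD

print((2 + 2) % MOD)  # 1 -- the joke's arithmetic, i.e. 2 - 1
```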