This seems appropriate: https://youtu.be/UIKGV2cTgqA?t=149
Computers (modern ones) are Base 2 internally. To avoid writing long strings of 0s and 1s, these were written as Base 8 and later Base 16 (one base-16 digit is four base-2 digits). Nowadays hardly anyone but embedded and OS programmers needs anything but decimal, and even then only rarely, but once upon a time I could think in Base 16 as fast as in Base 10. And yes, the horse is using Base 4.
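If you want to see the "one hex digit is four binary digits" relationship concretely, here's a minimal sketch using Java's standard `Integer` conversion methods (the value 202 is just an arbitrary example):

```java
public class Bases {
    public static void main(String[] args) {
        int n = 202; // arbitrary example value
        // Group the binary digits in fours from the right and you get the hex digits:
        System.out.println(Integer.toBinaryString(n)); // 11001010
        System.out.println(Integer.toOctalString(n));  // 312  (groups of three bits)
        System.out.println(Integer.toHexString(n));    // ca   (1100 -> c, 1010 -> a)
    }
}
```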
Java is consistent. Then you get people complaining that in GregorianCalendar months are numbered 0-11, which is consistent, but days are numbered 1…
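For anyone who hasn't been bitten by this, a minimal illustration of the mismatch, using the standard `java.util` API:

```java
import java.util.Calendar;
import java.util.GregorianCalendar;

public class CalendarQuirk {
    public static void main(String[] args) {
        // 1 February 2024 -- note the month constant FEBRUARY is 1, not 2,
        // because months are numbered 0-11 (Calendar.JANUARY == 0)...
        GregorianCalendar cal = new GregorianCalendar(2024, Calendar.FEBRUARY, 1);
        System.out.println(cal.get(Calendar.MONTH));        // 1
        // ...while days of the month are numbered from 1:
        System.out.println(cal.get(Calendar.DAY_OF_MONTH)); // 1
    }
}
```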
this may be the best Dunwoody twist ever. he’s not an idiot time traveler after all!
Mr. Bug must be out of form: the horse joke was actually funny TO NERDS
I learnt binary and Boolean logic at a grammar school in England in the early 1960s.
This and a grade 1 O level in maths got me my first summer job on an ICL 1900 mainframe at the age of 15. However, other things intervened and I did not go back to computing till I was 30.
In those days there was a kind of hazing ritual for new employees. On the first morning I was sent out to find where all the tapes were kept and make a record. After a few hours of increasingly frantic searching I discovered that there were no tapes with numbers containing the digits 8 or 9, and none between 77 and 100.
So that was how I learnt that everything was numbered in octal…and the location of every storage point in the building.
Check out the history of how Perl has handled dates if you want to see similar madness. I think at this point it's all smoothed out, though - you call the DateTime module from CPAN and pretend the native Perl time and date functions never existed.
But I do a lot of sysadmin work, as well as large-scale dissimilar systems integration and architecture, so naturally I hate Java. Especially client-side Java, which is a sysadmin nightmare of unrivaled proportions. Everyone involved should be ashamed of themselves.
I am also annoyed that C has no type-independent exponentiation operator, and that awk uses an invisible string concatenator, and so forth - I can probably find a reason to hate every language, honestly!
Back on topic, I like the way the Middle-Aged-Couple-in-Armchairs never spill their tea.
The Babylonians used base 60, as do we – seconds, circles.
And, arguably, American carpentry is base 12.
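The base-60 habit is hiding in plain sight every time we write a duration: each "digit" of H:M:S is a base-60 digit. A tiny sketch (the value 7384 is an arbitrary example):

```java
public class Sexagesimal {
    public static void main(String[] args) {
        // 7384 seconds written as three base-60 "digits":
        int totalSeconds = 7384;
        int hours   = totalSeconds / 3600;      // 2
        int minutes = (totalSeconds / 60) % 60; // 3
        int seconds = totalSeconds % 60;        // 4
        System.out.println(hours + ":" + minutes + ":" + seconds); // 2:3:4
    }
}
```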
Just like base 10 does not contain the number A.
Basically, it was (and remains) far simpler to distinguish between two distinct voltage levels on a wire than between 10. The voltages need to be spaced reasonably far apart so there's no chance of confusion, so the more possibilities you have, the higher the highest voltage has to be. That means more power draw, more heat, and more distance between the wires to avoid interference; it's just much more efficient to trim everything down to "off" and "on."
(I am not an electrician and I may have gotten some details wrong. It might be amps or ohms or something instead of volts. Whatever.)
I thought @Medievalist should be “Der heretic”. Am I wrong?
It is a bit more complicated than that.
Internally to some devices, like MLC flash, values may be stored with more than 2 levels for compactness.
Early computer circuitry using transistors couldn't reliably be in more than 2 states (off and on), but this was not seen as a problem because Boolean logic only has two states, true and false. Internally to a computer there is no fundamental operation of addition; everything is Boolean logic, which resolves to the fundamental operations of negation (true->false and vice versa) and either logical AND or logical OR. In other words, the fundamental problem space maps perfectly onto base 2, because each digit can be represented as being either true or false. Any multilevel system of logic must at some point be resolved into Boolean logic to let the fundamental operations happen.
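To make "there is no fundamental operation of addition" concrete, here's a sketch of a ripple-carry adder built entirely from NOT, AND, and OR (the class and method names are mine, just for illustration): XOR is derived from those primitives, a full adder is built from XOR, and addition is the full adder applied to each binary digit in turn.

```java
public class BooleanAdder {
    // XOR derived from the primitives mentioned above: NOT, AND, OR.
    static boolean xor(boolean a, boolean b) {
        return (a || b) && !(a && b);
    }

    // A full adder: one binary digit of addition, expressed purely as Boolean logic.
    // Returns { sum bit, carry-out bit }.
    static boolean[] fullAdder(boolean a, boolean b, boolean carryIn) {
        boolean sum = xor(xor(a, b), carryIn);
        boolean carryOut = (a && b) || (carryIn && xor(a, b));
        return new boolean[] { sum, carryOut };
    }

    // Ripple-carry addition of two 8-bit numbers, least significant bit first.
    static int add(int x, int y) {
        int result = 0;
        boolean carry = false;
        for (int i = 0; i < 8; i++) {
            boolean a = ((x >> i) & 1) == 1;
            boolean b = ((y >> i) & 1) == 1;
            boolean[] out = fullAdder(a, b, carry);
            if (out[0]) result |= (1 << i);
            carry = out[1];
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(add(25, 17)); // 42
    }
}
```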
With reliable modern fabrication processes, the disadvantage of two-state logic - that it was not very power-efficient in the early days - is overcome by using the lowest operating voltages that remain reliable. So while in the early days computer circuits operated at between 5 and 15 volts, nowadays the difference between a 0 and a 1 may be only around 0.3 volts. That, and the tiny transistors used, explains how we can get so much processing power into a mobile phone.
A horse that can count up to 4 is an example of a very power-inefficient logic circuit.
On the other hand, adding numbers in base 1 is the easiest of any number system.
1 + 1 = 11
11 + 11 = 1111
and so on.
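In code terms, unary addition really is just string concatenation - a number n is n tally marks, so adding is sticking the tallies together. A minimal sketch (the helper name is mine):

```java
public class Unary {
    // In a unary ("base 1") system, the number n is just n tally marks.
    static String toUnary(int n) {
        return "1".repeat(n);
    }

    public static void main(String[] args) {
        // Addition is plain concatenation:
        System.out.println(toUnary(1) + toUnary(1)); // 11
        System.out.println(toUnary(2) + toUnary(2)); // 1111
        // Counting the marks gets you back to decimal:
        System.out.println((toUnary(2) + toUnary(2)).length()); // 4
    }
}
```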
@kupfernigk, @ChuckV - perhaps this is another example of my innumeracy, but, just as base 10’s unique digits are 0…9, isn’t base 1 just 0 - or do we assume there’s no zero, and thus only one? Having only one unique digit leads to some problematic representations.
The definition below (of radix) implies, though it is not completely clear, that it includes 0.
If there is 1 unique number, and 0 is included, nothing else (ie, “1”) can be included.
Of course …
Base 1 would be a unary number system. In theory, any symbol could be the lone symbol of such a system. In practice, unary number systems are only used for tallying, and the symbol is a simple stroke, which may not exactly be a “1” but is close enough by lay standards. (Meaning that mathematicians and logicians may not agree.)
ETA: Ah, looking at your linked article. I see that unary is acknowledged and noted as a special case, but not really explained clearly. It looks to me like unary is not a positional numeral system, despite what the article implies. A PNS should have a 1’s place, 10’s place, etc., or whatever places are appropriate for the PNS’ base. But in unary, all “places” are valued equally.
Oops - and I didn't scan too far down the article.
Aaaand, re-reading @kupfernigk's equations for a 4th (IIIIth) or 5th (Vth) time, I see there isn't a positional system in place, unless we say that the 5th hash in
11 + 111 :=
counts. But it still isn’t really “positional” so much as an agreed-upon format for organizing the 5th hash. It is present, and
Speaking of Roman numerals, I understand that the subtractive-notation method is more common in modern usage, but it still existed in "Roman times", along with double-subtractive methods.
Your excellent reply beat me to it. There is little to add other than perhaps a bit of context.
In Peano’s original work on the mathematical philosophy underlying natural numbers, he was careful to avoid the idea of “unity”, defining the idea of a sequence such that the interval between each element of the sequence was not capable of being subdivided.
Zero is not actually a number; it is the absence of number. Thus the number 100 means one hundred, no tens and no units. Using zero as a placeholder for powers of the base is a convenience construct only. If we worked always on a checkerboard and had a mark for “right hand end of number”, we would not need zero. As you say, in unary there is a symbol for “something there”; the absence of number is an uncut tally stick or a cord with no knots, and each cut or knot adds one “successor” to the number.
Perhaps the most bottom-backward and infuriating version of subtractive notation is the telling of time using the “twenty to three” or “quarter to four” method.
The Russians go several awful stages further on this, so that ten minutes past three becomes “ten minutes of the fourth [hour]”, i.e. at ten past three we are looking forward to the next hour.
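The English convention is at least mechanical enough to write down: up to the half hour you count minutes past the current hour, and after it you count minutes down to the next one. A sketch of that rule (the class and method names are mine, and this ignores the "quarter"/"half" wording, printing bare numbers instead):

```java
public class TellingTime {
    // English subtractive convention: past the half hour, count down
    // to the *next* hour ("twenty to three", "quarter to four").
    static String tell(int hour, int minute) {
        if (minute == 0)  return hour + " o'clock";
        if (minute <= 30) return minute + " past " + hour;
        return (60 - minute) + " to " + (hour % 12 + 1);
    }

    public static void main(String[] args) {
        System.out.println(tell(2, 40)); // "20 to 3"  (twenty to three)
        System.out.println(tell(3, 45)); // "15 to 4"  (quarter to four)
        System.out.println(tell(4, 15)); // "15 past 4" (quarter past four)
    }
}
```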
When it comes to number systems, time, and dates, we seem as a species to have a built-in desire to obfuscate and overcomplicate. I do wonder what would have happened had the strict Orthodox among the Israelis invented the stored-program digital computer; progress would have been held up for years as they tried to work out how to incorporate intercalary months and a start of the day that begins at sunset.
…and base 64. I think it was Whitworth who complained of engineers using expressions like “a full 32nd” or “a bare 64th”.
I’ve noticed that missing out the ‘past’ bit when telling time (e.g. half four for 4:30) is always fun for confusing statesiders.