Our Generation Ships Will Sink

It’s odd to me that people have such a hard time imagining what it would be like to find out more about the universe without also imagining, on top of that, that humans are out there doing the manifest destiny thing and remaking those worlds in our own image. KSR isn’t telling us we’re doomed to remain ignorant of other stars, just that we’re unlikely to own any of them.

It’s not hard for me to imagine the human race doing to the solar system what we’ve been doing to our planet. But it’s impossible for me to imagine us earning our way to another star system without growing the fuck up first as a species.

He doesn’t even mention the reason I think we’re stuck inside the gravity well for the time being: we lack the necessary attention span. Sixty years after we started playing with radioactives in a big way, we still haven’t gotten any closer to figuring out what to do with the stuff once we’re done playing with it. That kind of negligence doesn’t bode well for would-be starship designers.

Finally, it bugs the hell outta me that the Apollo program is so completely misunderstood even today. It wasn’t to explore, and it wasn’t to embarrass the Russkies, though both of those were happy side effects. It was to give humanity something harder and more interesting to do than simply putting nukes in orbit. When would-be space colonists forget this historical imperative, they distort the reality they think will propel them into the future. There’s simply no compelling reason to do such dangerous, expensive work, and barring a SETI jackpot, none on the horizon.

3 Likes

#I LOVE U MAN 

1 Like

You wouldn’t. You’d tell a makerbot to print a working tooth.

He’s right. Completely.

So refreshing.

The problem is, why bother? If they aren’t going to be humans in space colonies, with “human” being what we are rather than creatures modified to better suit it, what makes that preferable to just robots? Presumably, the attraction of space colonization (which I’ve never been that excited about myself) is that humanity will continue after we’ve wrecked Earth. But if what continues is only loosely human, that incentive is lost.

I’m sure some good sci-fi is waiting to be written of an ancient civilization that seeded their galaxy with self-replicating machines, all busy terraforming planets and moons and mining ore, and yet the aliens themselves never managed to solve the problem of long-distance meat travel, and their civilization collapsed before any of them got out of their solar system…

2 Likes

+1.

In theory someone could come up with a completely new approach to intelligence, the entirety of which (instructions and working memory) could fit on a 64kB memory stick, but as far as we know now that ain’t possible.

Also, in theory we could all carry around portable wormholes and teleport to whatever planet we wanted, but the point of OP’s post seemed to be that we should ground ourselves in reality.

1 Like

Same as ever, “humanity” is what you make of it.

By similar logic, Australopithecines might as well have killed themselves off, since humanity would eventually not be them.

1 Like

There’s Saberhagen’s “Berserker” series about a killer robot civilization that wipes out organic life wherever it finds it, mining planets to make more ships. Basically Skynet in space (but Saberhagen started writing the series decades before “The Terminator”).

Not really, no. Analog signals have infinite resolution, which is what makes them analog. Translating them into a system of discrete points can be done, but you always lose data about the relationships within the signals, so you need to hope you understand cognition itself well enough that it is not in the in-between bits. Also, arbitrary precision is only theoretical, just as any sound can only in theory be decomposed and resynthesized from an infinite number of sinusoids. You can only say what is “good enough” when you understand what is being simulated. Otherwise, it is simple economics of fudging the data because digital is cheaper and seems more convenient.

It probably seems counterintuitive to those who have not engineered both, but digital signals are a special case of analog, not vice versa. Digital appears to be an ideal black box because the larger analog problems of real materials and physics have been hidden from the user, and even from most developers.
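
To see the fudging in action, here’s a minimal sketch (numpy assumed; the signal, sample rate, and bit depth are all made-up numbers for illustration) of what digitizing throws away:

```python
import numpy as np

def digitize(x, bits=8):
    """Quantize samples to 2**bits discrete levels over their own range."""
    levels = 2 ** bits
    lo, hi = x.min(), x.max()
    q = np.round((x - lo) / (hi - lo) * (levels - 1))
    return q / (levels - 1) * (hi - lo) + lo

analog = lambda t: np.sin(2 * np.pi * 5 * t)        # a 5 Hz "analog" signal
t_samp = np.linspace(0, 1, 50, endpoint=False)      # sample at 50 Hz
t_fine = np.linspace(0, 1, 5000, endpoint=False)    # stand-in for "continuous"

# Reconstruct between samples by interpolation, then measure what was lost.
recon = np.interp(t_fine, t_samp, digitize(analog(t_samp)))
print("max error vs. the original:", np.abs(recon - analog(t_fine)).max())
```

The in-between bits show up as reconstruction error of a few percent, even though every stored sample is individually “correct”.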

No, that’s not what I was saying. Even single neurons physically reprogram themselves. And more fundamentally, there has been a bit of modelling of neurons, but no direct mapping of how their activity might relate to what might be called intelligence. There isn’t even any evidence to suggest that creating a brain out of actual biological neurons will result in intelligence.

That is pretty much what I was getting at. Everything which has actually been successful used other techniques.

I do, because emulation imposes a significant overhead. It’s not like emulating a Z80 on your i7, where we are dealing with something similar but simpler. The more dissimilar the thing being emulated, the greater the computational overhead is likely to be.

And again, this all seems to ignore more basic questions. What exactly is “general human-level intelligence”? Would you pay for a computer which basically thought like the average person? If it is so similar, why go to such lengths to copy what we already have basically for free? Since hardly anybody values human intelligence in actual humans, why would they suddenly value it in a computer? I see the value of powerful AI as being basically the opposite, that we can get cognition which is explicitly not human - that is its value.

Well, leaving aside that smarts and experience defy easy quantification, any complex digital process could be run on less powerful hardware; it would simply take much longer. If AI runs at nanosecond speeds, interacting with humans may seem to it like events on a geological timescale. So being slowed down enough to run as an eight-bit process would not hinder it - but the lack of addressing would. This all assumes, though, that an AI would run as a fairly monolithic algorithm on a general-purpose computer. Clustering 8-bit processors more like networked neurons or ganglia would offset the addressing burden.

2 Likes

There are at least a few books with that exact scenario.

Then there is David Brin’s “Lungfish” where we are the last of the aliens to evolve, all the meat ones have died, and our system is filled with their robot progeny.

2 Likes

Did you notice that I said, “up to arbitrary precision”?
The infinite resolution of analog signals becomes quite irrelevant below the level of noise; detail below the noise floor cannot be relevant to “cognition itself”. I agree that the more we know about how the higher-level processes work, the better we can get at leaving out “irrelevant detail” at the lower level. But we can already come up with a solid estimate of an upper bound on the accuracy required. Thus digital simulation is possible.
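
Back-of-envelope version of that bound (both physical figures below are assumptions picked for scale, not measurements):

```python
import math

signal_range = 100e-3   # assumed membrane-potential swing: ~100 mV
noise_floor = 10e-6     # assumed thermal/channel noise: ~10 uV

# Each extra bit halves the quantization step; stop once the step is
# smaller than the noise, since finer detail is drowned out anyway.
bits = math.ceil(math.log2(signal_range / noise_floor))
print(f"{bits} bits per sample would suffice")   # -> 14
```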

So what? Neurons are complicated. Simulating a neuron that reprograms itself is not impossible. Building an analog computer using vacuum tubes that simulates self-reprogramming neurons is harder, but digital software is actually pretty malleable.
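
For instance, a “self-reprogramming” unit is a few lines of mutable state in a digital simulation. A minimal sketch (a hypothetical Hebbian-style toy, not a biophysical model; numpy assumed):

```python
import numpy as np

class PlasticNeuron:
    """A toy neuron that rewires itself as a side effect of firing."""
    def __init__(self, n_inputs, rate=0.01):
        self.w = np.random.randn(n_inputs) * 0.1   # mutable "wiring"
        self.rate = rate

    def step(self, x):
        y = np.tanh(self.w @ x)       # respond to stimulus
        self.w += self.rate * y * x   # Hebbian-ish update: activity rewires it
        return y

neuron = PlasticNeuron(4)
for _ in range(100):                  # the weights drift with "experience"
    neuron.step(np.random.randn(4))
```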

There is no direct mapping of how the activity of transistors relates to LOLcats, either. Are you saying a holistic understanding of how exactly a brain works is required before neurons can be simulated by a digital computer?

That is true. We’re only talking about a few orders of magnitude here, though.

Yeah, no. Many complex digital processes have lower bounds on their storage requirements. It’s not about addressing. A C64 connected to a tape recorder with an extra-long tape would, of course, manage. Which would return me to the other point I’ve made about three times already: It wouldn’t be practical. You need to be able to actually perform experiments if you want to make progress on understanding something very complex.
Also, as far as defying quantification is concerned: true, but irrelevant. We can establish bounds. We can estimate how much total input (uncompressed) a human brain gets; that’s an upper bound for “experience”, measured in bits. The lower bound is hard, because it relates to Kolmogorov complexity. If the intelligence in question is a Shakespearean actor who can recite Hamlet, that is not a great feat of intelligence, but I’m willing to bet it requires more than 64KB of “brain complexity”.
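
For a sense of scale (every figure below is an assumption; the ~10 Mbit/s per optic nerve is a commonly cited estimate, the rest is a deliberately generous guess):

```python
# Upper bound on lifetime "experience": total uncompressed sensory input.
SECONDS_PER_YEAR = 3.15e7
years = 40
optic_nerve_bps = 1e7    # ~10 Mbit/s per eye (commonly cited estimate)
other_senses_bps = 1e6   # generous lump sum for hearing, touch, etc.

total_bits = years * SECONDS_PER_YEAR * (2 * optic_nerve_bps + other_senses_bps)
print(f"~{total_bits:.1e} bits, i.e. roughly {total_bits / 8e12:.0f} TB")
```

That comes out around 10^16 bits uncompressed - a big number, but a bounded one, and comfortably more than 64KB.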

I’m not ignoring those basic questions. I just haven’t made a statement about them. In fact, I only stepped in to defend one specific statement made in the course of this discussion, namely that we now (plus or minus a few years) have sufficient computing power for experimenting with computation on the same scale of complexity as a human mind, and that that might be relevant to the speed of progress made in research on this subject.

Of course narrow AI is useful, so it’s not surprising that that’s where the money is. Human-Level Artificial General Intelligence is interesting because if we manage that, it is very likely to be not very far away from Transhuman Artificial General Intelligence (which is something that we may or may not want to create). Human Brain Emulation is interesting as a means of understanding human cognition, and in the very long term because it potentially allows immortality and maybe even interstellar travel.

True. It must be said, however, that it is still entirely unclear whether those successes are steps towards “artificial general intelligence” or just solutions to much more immediately useful “narrow AI” problems.

I’ll state one more thesis about narrow vs. general AI that I “weakly believe” in (It’s my current best guess, but I would be only mildly surprised if Google proved me wrong within the next ten years):
Accurate translation from one human language to another is impossible without general intelligence. Or in other words, Google Translate won’t work well before it turns into Skynet.

If false:
Return true

1 Like

Presumably, but “noise” is merely a term for the parts of a signal which seem to carry no meaningful data. It can still represent data about the physical properties of the system, such as thermal noise. When trying to reproduce an elusive, emergent property such as intelligence or consciousness, where the specific causes are not known, even apparent noise needs to be parsed and considered.

If we were predominantly concerned with high-level effects, why bother simulating cells? It is easy to say that we can know what is relevant and irrelevant in these domains, but this depends upon what exactly one is trying to implement. It makes sense when positing a certain type of behavior or adaptation, but I am not convinced that there are general-purpose metrics here.

Perhaps, depending upon what are considered to be the most salient details, and what degree of simplification is acceptable. It still doesn’t answer the “why” of hoping to convert the process from fluid digits to binary digits.

It’s still less efficient to play with a cheap approximation rather than make models which work the same way. Are neurons hardware, or software? If you think that it’s folly to make binary self-programming systems with circa 1980 technology (Commodore 64), why would you propose analog self-programming systems use circa 1950 technology (vacuum tubes)?

There isn’t?

No - I am saying that there is hardly even a clear indication how/if/what the brain has to do with intelligence. And that I am not convinced that there is any point to simulating biological neurons to yield new information systems. There might be, but I have no reason to bet on it.

No. Orders of magnitude apply to comparisons between things which are fundamentally “more of the same”, not to systems which are fundamentally dissimilar. The only clear explanation I have ever gotten for why human cognition should be understood as a form of programmable binary logic amounts to the simple reduction that these are the tools people want to use. But this has never been how science has worked. You understand phenomena by creating tools which are best suited to them, not by crowbarring your favorite tool into the task as a base assumption.

Storage is precisely an addressing problem. It’s why you can’t simply wire your C64 or other 8-bit processor to a 4GB stick of RAM. Storage is practically infinite - but only if you can access it in some meaningful way.
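
The arithmetic behind the C64 example, for what it’s worth:

```python
# A 16-bit address bus can point at 2**16 distinct bytes; 4 GB needs 32 bits.
bus_bits = 16
print(2 ** bus_bits)                  # 65536 bytes = 64 KB directly addressable
print((4 * 2**30 - 1).bit_length())   # 32: address bits needed for 4 GB
```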

Brains don’t use discrete storage, so they have no “input”. Brains have stimulus, and what they store is a self-devised pattern which is their impression of the stimulus - literally, an analogy. They are not written to or read from directly like a recording device. Any apparent input is a mock-up composed from existing models. All experience is bootstrapped from other experience. Storage can still be offloaded externally, but this is subject to the same abstraction process as the original stimulus.

For distinctions of narrow versus general AI to be meaningful, one has to ask how much consensus there is about standards of narrow versus general natural intelligence - or even about the definition of intelligence generally. Your examples here all use human cognition as a benchmark for intelligence, with no justification. Measuring intelligence against human values of intelligence merely turns it into an expensive vanity project. There is no reason to assume that an intelligence needs to resemble humanity to be immortal or to travel long distances.

Human languages are not efficient ways for an AI to communicate! And Skynet is another daft example of people attributing animal instincts to computers which don’t need them. I consider the perspective of assuming computer intelligence should resemble that of humans to be about as absurd as the very real push to make robots in human form. It is a waste of time and money which yields something not as good as either. Since domestic robots don’t climb trees, there is no pressing reason for them to have ape-like bodies. Even making them in a bigger, more impressive interpretation of the human form still completely disregards efficient design from first principles. Yet, it remains a hugely popular outlook, despite a nearly complete lack of self-awareness about the arbitrary nature of such design engineering goals.

2 Likes

SEXBOT SASQUATCHES

furry love, FTW!

2 Likes

ETA: upped a local copy

1 Like

Ambient influences on the brain introduce noise. If you know that a person behaves the same whether or not there is a magnetic field in the room, then, for the purposes of understanding cognition, you can ignore anything with a smaller magnitude than the physical effects of that field on brain cells. And that is only one example.

Because one way of figuring out those processes is by rebuilding them from basic building blocks. Do you have a better way in mind, or are you just arguing that nothing can possibly be done?

I didn’t. I accused you of doing so. Simulating a self-programming system is hard using 1950s technology. It’s not particularly hard as a digital simulation. When engineers try to figure out how to best build an airplane wing, they replace the real wing by a digital simulation. You have provided no argument why a neuron should be different.

No. In much the same way as there isn’t between individual neurons and intelligence. You somehow seem to demand that this mapping be known for neurons before any other statements are made, and I don’t think anyone will ever be able to provide a “direct mapping” that satisfies you.

What? Now you’re just driving trollies. What, pray, do you think is the evolutionary purpose of this strange organ that uses up about a quarter of the human body’s energy expenditure?

Digital computation is a universally useful tool for simulating stuff. That applies to completely, fundamentally dissimilar things such as airplane wings and light rays in computer graphics, and there is no reason why it shouldn’t apply to neurons as well. We currently have no other comparable tool at our disposal. If you want to know how a network built from a million neurons wired in this and that way according to some theory about how stuff might work at the low level will react, it is simply not feasible to do it any other way.
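
To put a number on “not feasible any other way”, a minimal sketch (toy dynamics, hypothetical parameters, numpy/scipy assumed) of one update step for a million sparsely wired leaky integrate-and-fire neurons:

```python
import numpy as np
from scipy.sparse import csr_matrix

N, K = 1_000_000, 10                     # a million neurons, toy fan-in of 10
rng = np.random.default_rng(0)
rows = np.repeat(np.arange(N), K)        # each neuron listens to K others...
cols = rng.integers(0, N, size=N * K)    # ...chosen at random
W = csr_matrix((rng.random(N * K) * 0.5, (rows, cols)), shape=(N, N))

v = np.zeros(N)                          # membrane potentials
threshold, leak = 1.0, 0.9               # made-up dynamics parameters

def step(v, spikes):
    v = leak * v + W @ spikes                   # decay plus synaptic input
    fired = (v > threshold).astype(float)
    return np.where(fired > 0, 0.0, v), fired   # reset whoever just fired

spikes = (rng.random(N) < 0.01).astype(float)   # seed 1% random activity
for _ in range(10):
    v, spikes = step(v, spikes)
```

This runs in seconds on a laptop; try wiring up a million physical analog units to test a theory and you see the asymmetry.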

No. I can’t hook up a 4GB stick of RAM to a processor with a 16-bit address bus, but it would be possible to hook up a C64 to the internet and “address” the whole world. Storage is just limited by how much storage you can buy, given present-day manufacturing costs.

Splitting hairs again, are we? If you care only about the borderline cases, there is probably no consensus at all. Which is not relevant to the use of the terms to clearly categorize most of the things we can imagine, and 100% of anything that already exists.

Justification: I am human. I can think. I want to know how that works and how to replicate that property.
You seem to be using a destructive dual strategy of (a) discounting anything that is linked to human experience as arbitrary and narcissistic and (b) discounting anything that is not linked to human experience as unknowable by humans. Pretty convenient, eh?

I don’t care about those immortal star-travelling other intelligences. I was saying that emulation of human brains, if the very hard problem of data acquisition (“brain scanning”) is ever solved, can lead to immortality and star travel for humans. See, it’s all about the humans. Because, in case you haven’t noticed, most of us self-identify as human here.

Completely beside the point. Human languages are a problem. A problem that can be solved by humans. A problem that currently cannot be solved by machines. A “human-level general intelligence” can by definition solve that problem, because humans can; if it couldn’t, it wouldn’t be a “human-level intelligence”. I was making a statement about whether a machine could solve that task without being able to solve all the other mental tasks that humans are capable of.

Skynet is from a SciFi/Horror movie. Couldn’t you tell I used that reference jokingly?

#Is there any escape from noise?

8 Likes