Thank you for the kind elucidation (and you are right, I didn’t read the TFA).
I was talking about diseases actually caused by our post-industrial environment, rather than problems truly associated with senescence, so my post’s more than a bit orthogonal to Stross’s, I guess.
Is it actually a serious problem? Isn’t associative indexing fast enough for this?
What about the more hardcore approaches, e.g. consciousness uploading? Would a hardware platform other than the lousy, squishy bio-wetware alleviate this issue?
Nice article. Note, though, that conventional general-purpose silicon was used. (Also worth mentioning that the one simulated second took 40 minutes of wall-clock time and ate a petabyte of system memory.)
With task-specific chips this may be less dramatic. Think of an architecture like an FPGA, but optimized for neural simulations. I think there are already some chips vaguely like that out there.
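To make it concrete, the expensive inner loop in these simulations is tiny and uniform, which is exactly what dedicated hardware is good at. Here’s a minimal leaky integrate-and-fire update step in Python (my own sketch; the constants and the numpy formulation are illustrative, not taken from the article or any particular simulator):

    import numpy as np

    def lif_step(v, i_syn, dt=0.001, tau=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065):
        """Advance membrane potentials v by one time step dt (SI units)."""
        v = v + (dt / tau) * (v_rest - v + i_syn)  # leaky integration
        spiked = v >= v_thresh                     # threshold crossing
        v[spiked] = v_reset                        # reset fired neurons
        return v, spiked

    # one step over a million model neurons
    v = np.full(1_000_000, -0.065)
    i_syn = np.random.normal(0.0, 0.005, v.shape)  # stand-in synaptic input
    v, spiked = lif_step(v, i_syn)

A chip that hard-wires this update per neuron, instead of pushing it through a general-purpose instruction pipeline, is where the orders of magnitude would come from.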
Then add some nanotech breakthroughs, namely self-assembly of mass-produced parts into 3D structures. Look at viruses for inspiration: cells produce components that self-assemble into virions. Pair this with computing approaches that can run on partially faulty hardware, since the self-assembly process will not be error-free (a toy fault-masking example below).
It will take a while and likely won’t run on lithography-made silicon. Conventional life-extension technologies will likely be necessary to survive the wait for the new platform.
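For the runs-on-partially-faulty-hardware part, the classic trick is redundancy with voting. Here is a toy triple-modular-redundancy sketch in Python (mine, purely illustrative: three unreliable replicas of the same computation, majority vote on the output):

    from collections import Counter

    def tmr(replica_a, replica_b, replica_c, x):
        """Run three possibly-faulty replicas and majority-vote the result."""
        results = [replica_a(x), replica_b(x), replica_c(x)]
        value, votes = Counter(results).most_common(1)[0]
        if votes < 2:
            raise RuntimeError("no majority: more than one replica failed")
        return value

    healthy = lambda x: x * x
    faulty = lambda x: (x * x) ^ 0x4          # one unit with a stuck bit
    print(tmr(healthy, healthy, faulty, 12))  # -> 144 despite the fault

Self-assembled hardware would presumably need something like this baked in at every level, since you can’t assume any individual gate is good.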
According to Wikipedia we’re at 14 nm now, with an outlook of 5 nm in 2020 (read this; it mentions the successor technologies). THAT (some say the 1 nm node projected for 2028) will be the end of Moore’s law. At least in the context of conventional lithography.
I expect different technologies to replace planar lithography, possibly hybrid ones (self-assembly on a lithography-prepared substrate, for example, leveraging higher-complexity nanoparticles or other kinds of molecular electronics). Add fault-tolerant computing to deal with buggy structures that are unique to each die: possibly a gate-array version of the bad-block mapping used in flash memories to increase yields (sketched below), plus algorithms tolerant of errors, which will in addition increase the radiation hardness of the resulting structures. I see an overlap here between handling structure-induced errors and radiation-induced single-event upsets.
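The flash analogy, spelled out: at test time you find the bad blocks, retire them, and transparently remap their addresses to spares. A minimal sketch in Python (names and sizes invented for illustration):

    class BlockRemapper:
        def __init__(self, total_blocks, spare_blocks):
            self.remap = {}                    # logical -> spare physical
            self.spares = list(range(total_blocks,
                                     total_blocks + spare_blocks))

        def mark_bad(self, logical):
            """Retire a block found faulty at test time; map in a spare."""
            if not self.spares:
                raise RuntimeError("out of spares: chip fails final test")
            self.remap[logical] = self.spares.pop()

        def resolve(self, logical):
            """Translate a logical block to the physical block actually used."""
            return self.remap.get(logical, logical)

    chip = BlockRemapper(total_blocks=1024, spare_blocks=32)
    chip.mark_bad(17)        # factory test found block 17 defective
    print(chip.resolve(17))  # -> 1055, a spare
    print(chip.resolve(18))  # -> 18, untouched

The same table-plus-spares idea applied to a gate array would let you sell chips with known-bad regions instead of scrapping them, which is what makes unique-per-die defects tolerable.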
There’s a lot of space in the third dimension. Things will go there, fairly inevitably, once the easier 2D approach hits the hard limits of physics.
Not to mention there are a lot of transgender women out there who are unable to afford surgery, especially the usually low-income sex workers. That job might be SOME people’s first choice, but it is many others’ last option, given how degradingly that industry tends to treat all its workers, particularly trans people.