Giving up on saving the world

I think that’s really quite unkind. I do not believe he is pleased about the outcome, or the “occasion for authenticity”… but rather is pleased about the serenity that comes with acceptance.

I’d say you begrudge him. And that’s good. It’s an uncomfortable reality that he has a very good handle on. We -are- screwed. That said, I believe it is a feature, not a bug.

One of the depressing things about the future is that if technology doesn’t provide the way out, by that point it will have removed a lot of the other options. For example, many parts of the world are going through groundwater and topsoil far faster than they can be replaced; at this rate, many important sources will be effectively exhausted within decades. If the land stops producing enough food, or there’s not enough water for a region, that will cause knock-on effects as people leave for other areas. People who live in big cities like New Delhi won’t be able to grow their own food or get their own water if supplies fail, and as long as there’s a deficit on this scale, with economic growth putting still more pressure on water levels, it’s hard not to foresee a time when there will no longer be sufficient resources to supply the population.

It’s certainly possible to imagine a better allocation of resources that may prevent disaster in time, but it probably isn’t going to happen enough, and certainly not everywhere. If there’s no technological miracle in time and a city is left without an affordable water supply, you can’t even go back to pre-industrial methods to survive - you have to find somewhere new to live (which probably won’t have an abundance of resources itself).

What happens when millions of people in coastal cities have to leave because of the rising sea levels? By the time it really affects people, it will be far too late to do anything and the scale will be bigger than we can imagine. This has the potential to topple governments and destabilize regions, which will make it even more difficult to manage the problem. Added to this, we probably won’t have the easy access to energy and other resources that we do now, making it even more difficult to make a difference.

I’d really like to believe that it will be difficult but we’ll make it through somehow, but I think our odds are not good. Seeing the general political resistance to almost any environmental initiative with teeth, and the growing right-wing opposition to immigration even with a relatively small amount of pressure, I’m not confident that humanity will reach the level necessary to deal with this crisis. I’m sure humanity will survive, but not as we know it.

Precisely the kind of anthropocentric, Chicago-school claptrap that got us into this mess in the first place. And even if you ignore the rest of the planet and concentrate only on the vermin killing it at an unprecedented rate, I find it incredible that anyone who claims to have their eyes open can see us getting our shit together anytime soon.

People will adapt.

To what, a poisonous wasteland ruled by an all-seeing corporatocracy?

Yeah, great.

Technically, cities can grow their own food. High-efficiency solid-state lighting works wonders, and the efficiency of resource use (water, nutrients…) is much higher than with conventional farming. And there’s no degradation of soil, as none is used.

http://www.npr.org/blogs/thesalt/2013/05/21/185758529/vertical-pinkhouses-the-future-of-urban-farming

Only energy is needed, which can be supplied with thorium-cycle reactors.

Of course it does, you just need to think outside the box a little.

I give you…the wearable holodeck!

We’re almost there; it’s just not in precisely the direction people have been pointing. And once we have it, the vast majority of humanity will have a greatly reduced need for things, plus access to productivity-improving interfaces I honestly can’t begin to imagine, and I’m pretty creative.

No fancy brain interfaces needed, no real technological advancements at all, just a different way of assembling what we already have and some refinements that are well along paths we’re already following.

It’s one of those ‘obvious in retrospect’ things.

Yessssssss!

BTW, do you have any idea about an embedded computer board that is able to process several video streams in real time? Or is it rather a task for an FPGA board acting as a custom-made USB camera, combining data from several image sensor chips into one frame (not THAT difficult, I reckon)?

CCDs work by sending a pulse that “exposes” the cells, then sending sequences of pulses to shift registers that shift each row to the subsequent row (and the last one to the row buffer), and then clocking out each pixel from the row buffer, converting the array of pixels to an analog output stream. (Or several streams for parallel readout.) That then goes through an ADC (which itself can be more than 8-bit and facilitate at least some of the HDR capabilities). A dumb CCD with a suitable clock generator can then provide monochrome analog video without much other electronics; attached to an ADC and other electronics, it can provide an image directly to a computer.

With an FPGA it should be relatively easy to pack e.g. four simultaneously acquired 640x480 images into one 1280x960 frame. The two underneath can even be from the same CCD, acquired sequentially with different “shutter speeds”; with a 12-bit ADC in the mix we can get quite a lot of HDR. Combine the two frame halves together, get a 1280x480 image, feed it through OpenGL to apply the barrel distortion to the left and right halves, and feed it to the Oculus Rift. (Resolutions would be higher for a real application.)
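The dual-exposure merge and the four-up frame packing can be prototyped on the host before committing anything to FPGA logic. A minimal NumPy sketch - the function names, the 16x exposure ratio, and the 12-bit clipping level are illustrative assumptions, not a spec:

```python
import numpy as np

def merge_hdr(short_exp, long_exp, ratio=16, white=4095):
    """Merge two 12-bit frames of the same scene, shot with shutter
    speeds differing by `ratio`, into one linear HDR frame: wherever
    the long exposure clips, substitute the scaled-up short one."""
    short_exp = short_exp.astype(np.float32)
    long_exp = long_exp.astype(np.float32)
    saturated = long_exp >= white
    return np.where(saturated, short_exp * ratio, long_exp)

def pack_quad(a, b, c, d):
    """Tile four 640x480 frames into one 1280x960 frame, mimicking
    the FPGA packing four sensor streams into a single video out."""
    return np.vstack([np.hstack([a, b]), np.hstack([c, d])])
```

On the FPGA the same merge is just a per-pixel compare-and-mux in fixed point; the NumPy version only makes the arithmetic easy to check.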

For added pizzazz, combine several image sources into one real-time image - RGB camera, night vision (“infragreen”, as I love calling it), thermal imaging - generating a false-color image in real time, with the style/mapping chosen for the best situational awareness in the given moment.
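One simple fusion style is to give each sensor its own output channel. A sketch, assuming all three feeds are already registered to the same resolution (the channel assignment is just one possible mapping):

```python
import numpy as np

def normalize(frame):
    """Stretch a frame to the 0..1 range (crude per-frame auto-gain)."""
    frame = frame.astype(np.float32)
    lo, hi = frame.min(), frame.max()
    return (frame - lo) / (hi - lo) if hi > lo else np.zeros_like(frame)

def false_color(visible, infragreen, thermal):
    """Map thermal -> red, near-IR "infragreen" -> green, visible
    luminance -> blue; returns an HxWx3 image with values in 0..1."""
    return np.stack([normalize(thermal),
                     normalize(infragreen),
                     normalize(visible)], axis=-1)
```

A smarter version would switch mappings per situation (e.g. thermal-dominant at night) - which is exactly the “best style for the moment” idea above.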

For underwater work, overlay the video with synthetic imagery from a sonar array, providing a 3D wireframe model of the surrounding environment and allowing work even in muddy water with zero visibility.
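The sonar overlay boils down to projecting 3D points from the sonar’s model into the camera’s image plane. A pinhole-model sketch - the focal length and image center are made-up values for a 1280x480 stereo half, not calibration data:

```python
def project(point, f=500.0, cx=640.0, cy=240.0):
    """Project a 3D sonar return (x, y, z in metres, camera frame,
    z pointing forward) to pixel coordinates with a pinhole model.
    Points behind the camera are dropped."""
    x, y, z = point
    if z <= 0:
        return None
    return (cx + f * x / z, cy + f * y / z)

# A wireframe is then just 2D line segments between the projected
# endpoints of each model edge (one illustrative edge, 2 m out):
edges = [((0.0, 0.0, 2.0), (1.0, 0.0, 2.0))]
segments = [(project(a), project(b)) for a, b in edges]
```

A real implementation would also need the rigid transform between the sonar array and the camera, but that’s one matrix multiply per point on top of this.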

The missing cheap off-the-shelf parts are: a fast enough embedded computer board (I believe this is not so difficult to find; there will be tradeoffs between power consumption and computing power, but that will be solved in the future “on its own”); multichannel video input (likely it has to be an FPGA board, as anything off-the-shelf will be priced out of the non-corporate world); and multichannel analog input/output (for the mentioned sonar array), implementable in a similar way on an FPGA. The USB audio and USB video class specs seem to be just perfect for this, and the standards compatibility should further simplify the work by not having to port kernel drivers to different OSes and architectures, because the generic drivers are already there.

Thoughts, comments?

…and the HDR welding helmet is something to WAAAANT!

Ooh, clever. I was actually thinking purely commercial quality (it’s not like the idea isn’t investor-friendly) but you’re right in that FPGAs might be an excellent way to stage things while waiting to get enough resources to put together a proper fab.

Obviously people don’t want to be walking around with a bulky Oculus Rift type thing on their heads, but that’s completely unnecessary since we’ve got gasp wires (or fiber optics). We could accomplish the same thing without being so unwieldy, so something small you put in a backpack could easily drive the first couple of steps along the way from a visual standpoint, and that way you could really crank up the resolution and FPS without having to worry about draining too much power. We’d probably be talking about multiple layers anyway, depending on what we’re doing at the time (even if we start without fancy translucent OLEDs and just go the eSight route, we’d still be using one channel to provide a picture of what the eyes would normally be seeing and other channels for the HUD/augmented-reality elements).

When you tie in the full 3-D touch elements (which is really just low-level haptics and technology similar to what we’ve got going into exoskeletons…but backwards (and therefore requiring far less bulk)) then everything suddenly comes together. The whole ‘let’s project things into space and somehow expect people to interact with ghosts’ is a complete dead-end and I’ll be glad when we stop wasting so many resources in that direction.

As a bonus, it’s essentially self-powering. The haptic/touch elements feed off of resistance, which means you’ve got something to capture and convert to energy. We’d want some sort of battery for when somebody just wants to sit back and watch a movie, but even if we never get a supercapacitor we’d still be able to charge up just by interacting with imaginary things (or walking, squeezing a ball, whatever).

And yeah, the helmet is freakin’ brilliant. The whole ‘using multiple cameras to gather data from multiple angles and weaving together an image that no human or camera could EVER see’ is pure genius. When they threw in that HUD/Guide thing that was just piling on the awesome. Total Sploosh.

Nice to see someone else seeing the potential :) It doesn’t solve everything, but it sure as heck solves a lot of things and frees up resources/man-hours to take care of the remaining food/shelter bits.

I don’t mind wearing odd hardware. Screw the people - I am far enough outside society that I don’t feel bound by their notions of “normalcy”. Screw them and their opinions. So an Oculus Rift it can be. (More likely CastAR for the augmented reality stuff, though. Not sure about full-immersion mediated reality yet.) Lighter hardware, with less load on the neck and head, is preferred, however. Microdisplays are a good bet for the future here: either miniature DLP projectors (as CastAR uses), or LCD-on-silicon (or so); to a degree, the smaller they are the cheaper they are to make, and the less material is used. Optics can then make the image “big”.

Not sure about the suitability of haptics. If exoskeletons, then I’d go for a full-scale powered one. Hints (piezo, vibration, electric impulses) do the same job as full force feedback for lower cost and complexity.

Power harvesting, from motion and other stuff, is still somewhat immature tech. Promising, but for now I’d go for batteries unless only micropower stuff like sensors needs feeding. Self-powering is something to strive for, but as of now I’d not consider it too viable.

Re more cameras and scene reconstruction - one word: telepresence! You can even scale this up or down a lot, and work with a big mecha when building a bridge or with micromanipulators when hacking a chip die, still with the same VR setup. (Better force feedback can actually be useful here, and the cost/complexity/bulk of a partial exoskeleton makes sense.)

The potential is impossible not to see! I have been waiting for this development since the early ’90s. (Actually, I have been thinking about augmented reality since I saw the Terminator 1 movie. Scary how fast time flies!)

This topic was automatically closed after 5 days. New replies are no longer allowed.