(Pictures and links to be added shortly! Mouse battery just died)
So, let’s hit some details because there are a couple of really exciting things that aren’t immediately obvious unless they’re placed in context . . . but when they are it’s positively droolworthy.
First . . . the vision stuff
I’m going to skip over the two big kids on the block here (Google Glass and Oculus Rift) because they’re really well known and I think this crowd already knows them pretty well. Instead, I’ll hit on two less-talked-about creations: eSight and the most amazing welder’s helmet EVER.
We’ll start with eSight because it’s nice and simple, and because it gives us a slightly different context. These guys make glasses for the visually impaired that aren’t really glasses. Instead, they’re small LCD screens that present a pre-focused image right in front of the wearer’s eyes. They’re not fancy translucent things or bulky headwear . . . they’re nice and simple.
Then we have this . . .
Now, ignore the sheer bulk. This is a helmet for welding; it’s mostly big so people’s faces don’t get all melted. Instead, let’s look at what they’ve done, because what they’ve done is positively amazing, and I think you guys will see some of the rest of the potential now that the seed is planted.
When you’re welding, the arc is so bright that no single camera in the world can gather a good image. The dynamic range is just too extreme: expose for the arc and everything else goes black; expose for the workpiece and the arc drowns out all the detail. They needed a way to augment that . . . to somehow take frames from multiple cameras in realtime and mesh them together into a single image.
So they did.
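For the curious, the underlying trick is what graphics folks call exposure fusion: each camera catches detail the others miss, and you stitch the best-exposed bits of every frame into one image. Here’s a rough sketch of the idea in Python using the textbook Mertens method from OpenCV . . . to be clear, this is NOT the helmet’s actual code, and the file names are made up:

```python
# Sketch: fuse frames from cameras at different exposures into one
# image where both the arc and the workpiece stay visible.
# Standard exposure fusion (Mertens et al.) via OpenCV -- not the
# helmet's real firmware.
import cv2
import numpy as np

def fuse_views(frames):
    """frames: same-size BGR images of the scene at different exposures."""
    merger = cv2.createMergeMertens()   # weights each pixel by contrast,
    fused = merger.process(frames)      # saturation, and well-exposedness
    return np.clip(fused * 255, 0, 255).astype(np.uint8)

# e.g. one camera exposed for the arc, another for the surrounding metal
dark = cv2.imread("arc_exposed.png")      # hypothetical file names
bright = cv2.imread("metal_exposed.png")
cv2.imwrite("fused.png", fuse_views([dark, bright]))
```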
Let that percolate a bit. We’re humans and we think in terms of our two eyes. That’s where our vision comes from, but an image no longer has to come from just two cameras, and those cameras no longer have to sit right where our eyes are.
Instead, every camera and sensor can be meshed together in different ways to provide us with images that are far beyond what we can do with our teensy little eyeballs. We see down into the infrared, up into the ultraviolet . . . Heck, we can see whatever we can sense, from magnetic fields to things around corners to . . . I seriously have no idea where this stops. There’s a scary amount of potential.
And we can supplement that in other ways, overlaying information that helps us like nothing else ever has. I don’t think I even have to get fancy, I just have to show you what these guys have already done!
Right? I mean . . . wow.
I think that plants the right seeds for our eyes: just think distributed. The devices that sense, the devices that process, and the devices that deliver the images to our eyes can be scattered all over the place; we just need to combine their output into an image that’s useful from our local perspective.
So let’s talk about touch.
This bugs me.
We humans are tactile creatures. Go try to touch the same point in empty space twice in a row. You can’t. We SUCK at that sort of thing. We can barely manage on our fancy super-slick cell phone screens because featureless glass isn’t what we’re designed for. Now try to touch a point on a textured surface. We’re AMAZING at that. We can guide ourselves to the tiniest point based on nothing more than a slightly different tactile sensation. That’s what we’re designed for.
What we’re NOT designed for is poking at holograms in mid-air. That’s a pain in the butt and completely unnecessary.
Haptics aren’t in the press as much . . . There aren’t any awesome videos of old ladies having the times of their lives in haptic suits or demolishing the competition with the Falcon . . . So for this we’re going to have to use a little more imagination.
We already have the ability to simulate touch with electricity, and we’re pretty good at it, so let’s start with a glove . . . one where (at least) the fingertips use electro-tactile stimulation, alone or in combination with arrays of tiny pins, to give the sensations of pressure and various textures.
Now, add tiny little servos at each joint . . . sometimes to push against your finger, but more often just to increase the resistance . . . and soon you can feel yourself touching something that’s not there.
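If you want to picture the control loop for one of those joint servos, it’s something like this. The hardware calls and the stiffness constant are all made up . . . this is just the shape of the idea, not any real glove’s code:

```python
# Sketch of a per-joint resistance loop: when the tracked finger
# crosses a virtual surface, ramp servo torque up in proportion to
# how far it has pressed in. Both hardware calls are hypothetical
# stand-ins, stubbed here so the sketch actually runs.
STIFFNESS = 0.8  # N*m per radian of penetration (made-up constant)

def read_joint_angle(joint):
    return 0.9     # stub: a real glove would read its flex sensor here

def set_servo_torque(joint, torque):
    print(f"joint {joint}: braking torque {torque:.2f} N*m")  # stub

def update_joint(joint, surface_angle):
    penetration = read_joint_angle(joint) - surface_angle
    if penetration <= 0:
        set_servo_torque(joint, 0.0)   # finger still in free air: no resistance
    else:
        # spring-like resistance: the deeper you press, the harder it pushes back
        set_servo_torque(joint, STIFFNESS * penetration)

update_joint("index_pip", surface_angle=0.7)  # finger 0.2 rad past the surface
```

Run that a few hundred times a second and your finger stops dead against a surface that isn’t there.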
I had the pleasure of playing with a few prototypes a few years back. The illusion is VERY convincing . . . as long as you don’t move your wrist . . . that totally ruins it.
But why stop there?
Connect the wrist to the elbow. Connect the elbow to the shoulder. Connect the shoulders across the back.
You still have the feeling of texture, but now you have something else. You can essentially feel anything that you can grasp between your hands: a ball, a gun, a puppy, a 3-D miniature, a book. You can squeeze and have whatever you’re holding respond naturally (don’t squeeze the puppy too much). You can toss the ball back and forth, shoot the gun, pet the puppy, move the miniature on the table, and fold the corner of the book you were reading before you put it away.
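And the math behind that virtual ball is almost embarrassingly simple. A sketch, with made-up stiffness numbers and hand positions standing in for whatever tracking the suit provides:

```python
# Sketch: "feeling" a virtual ball held between two hands.
# Penalty-based haptic rendering: if the palms get closer together
# than the ball's diameter, spring forces push them back apart.
import numpy as np

BALL_DIAMETER = 0.12   # meters (a hypothetical squishy ball)
STIFFNESS = 400.0      # N/m -- how firm the ball feels

def palm_forces(left_palm, right_palm):
    """Palms are 3-vector positions; returns (force_on_left, force_on_right)."""
    axis = right_palm - left_palm
    gap = np.linalg.norm(axis)
    squeeze = BALL_DIAMETER - gap          # > 0 means the ball is compressed
    if squeeze <= 0:
        return np.zeros(3), np.zeros(3)    # hands too far apart to touch it
    direction = axis / gap                 # unit vector, left palm -> right palm
    # equal and opposite spring forces shove the hands back apart
    return -direction * STIFFNESS * squeeze, direction * STIFFNESS * squeeze

# palms 10 cm apart squeezing a 12 cm ball -> 8 N pushing them apart
f_left, f_right = palm_forces(np.array([0.0, 0, 0]), np.array([0.10, 0, 0]))
```

Feed those forces to the servos along each arm and the ball pushes back; swap the spring constant and you’ve gone from a stress ball to a brick.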
Connect down the spine to the waist, down the legs to your feet.
Now your sensation of mass is limited by the strength of your servos.
Did I mention that you can recapture power, since the servos spend most of their time braking against your motion? Like regenerative braking in a car, this thing could go a long way toward powering itself . . . I almost forgot to mention that because it seemed like an afterthought among all the other coolness.
Sure, the really fun stuff would probably start out a bit bulky, but that’s not all bad, is it?
Now. . . I love fun as much as the next geek, but let’s get practical and use this to save lives and completely rewrite entire industries.
Think back to the augmented-reality welder’s helmet and the little seed I planted earlier about that surgeon. That’s where this gets really awesome (double-awesome!). Sure, there’s a lot of focus on using exoskeletons to enhance strength . . . but the real magic happens when we use them to enhance our precision. The same code that draws that precise line for the welder can also help the augments hold the surgeon’s scalpel steady in ways he never could on his own.
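The core of that steadiness is really just two tricks stacked together: scale the hand motion down, and low-pass away the tremor (physiological hand tremor sits somewhere around 8–12 Hz). A rough sketch with made-up numbers . . . not any real surgical robot’s controller:

```python
# Sketch: motion scaling plus tremor filtering for a tool tip.
# All constants are hypothetical; a real system would tune them carefully.
import numpy as np

class SteadyHand:
    def __init__(self, scale=0.2, alpha=0.05):
        self.scale = scale     # 5:1 scaling: 1 cm of hand -> 2 mm of tool
        self.alpha = alpha     # smoothing factor; smaller = steadier
        self.filtered = None

    def tool_target(self, hand_pos):
        """hand_pos: measured 3-vector; returns where the tool tip should go."""
        hand_pos = np.asarray(hand_pos, dtype=float)
        if self.filtered is None:
            self.filtered = hand_pos
        # exponential moving average: a crude low-pass filter that strips
        # the fast tremor but keeps the slow, deliberate motion
        self.filtered = self.alpha * hand_pos + (1 - self.alpha) * self.filtered
        return self.scale * self.filtered

steady = SteadyHand()
tip = steady.tool_target([0.10, 0.02, 0.0])   # called every tracking frame
```

A centimeter of shaky hand becomes two millimeters of smooth blade.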
I could go on for hours . . . Days even . . . But honestly I’m afraid of priming the pump too much. I can’t fully grasp the potential because there’s so much of it. Who here predicted cell phones would turn into tiny portable computers that can hold a good bit of Wikipedia, tell us how to get almost anywhere and how to avoid traffic on the way, and are more common in Africa than lighted streets?
With a tiny shift in our efforts we could ALL have this in a few years. Not fifty or twenty, but maybe four.
So here’s my question . . . What are we waiting for? How do we make this happen and get it into the right hands?