But miniature elephants? We know where this leads. First it’s miniature elephants, then unsafe theme parks with live dinosaurs and then you get a T-rex on the loose in San Diego.
Not seeing a downside…
I think eye-projection is currently the only significant way forward from tablets and smartphones. Microsoft et al. love to occasionally parade around this slick VFX-produced future where everything is some sort of curved touchscreen, but they’re still talking about just slapping screens on everything and calling it “future.” By controlling the light close to the source of vision, you eliminate an entire world of overhead; with that type of technology, physical devices almost fade away completely.
Input is still a hurdle of course; I wonder if Magic Leap has any tricks in store beyond voice commands.
Won’t be long before everybody is cock-eyed!
Pay to the order of Mrs. Wilbur Stark, one dollar and nine cents! Pay to the order of Iron Balls McGinty, one dollar and nine cents!
After playing around with an OR for a bit, I can see that this technology (if they can pull it off) would be vastly superior, assuming the glasses can perform well in brightly lit places. I love the OR and think it is completely awesome, but it can be a pain if multiple people use the unit. We’ve been playing around with scanning people’s heads in 3D to get more precise measurements for calibration files, but that would have to be performed for everyone who wanted to use the system, and it would still need to be adjusted manually for the perfect viewing location.
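For what it’s worth, the per-user part of that boils down to something like the sketch below: measure interpupillary distance off the scan and write it to a profile file. Everything here is assumed for illustration (the landmark coordinates, the field names, the file format); it is not the Rift SDK’s actual profile format.

```python
import json
import math

def interpupillary_distance(left_pupil, right_pupil):
    """Euclidean distance between pupil centers, in the scan's units (mm here)."""
    return math.dist(left_pupil, right_pupil)

# Hypothetical pupil-center coordinates picked off a 3D head scan (mm).
left_pupil = (-31.8, 102.4, 55.0)
right_pupil = (32.6, 102.1, 55.3)

profile = {
    "ipd_mm": round(interpupillary_distance(left_pupil, right_pupil), 1),
    "eye_relief_mm": 12.0,  # assumed value: lens-to-cornea distance
}

# Write a per-user calibration file for the viewer software to load.
with open("user_profile.json", "w") as f:
    json.dump(profile, f, indent=2)
```

And that’s the easy half; the manual fine-tuning for the sweet spot is the part that doesn’t automate away.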
The one problem I can see with the Magic Leap technology is people freaking the fuck out when virtual stuff pops up in their peripheral vision. Like imagine an advertising company that needs to push a horror movie…
I haven’t quite understood it, but I assume that they will need to use some kind of structured-light or other 3D imaging in order to get a sense of the space in front of the user? (If only because otherwise how would they know where to put that elephant?)
In that case, it seems that gesture recognition would be pretty simple.
And if you combine the two, it wouldn’t be hard to have arbitrary input screens appear in the air or on the table in front of you as needed.
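Roughly speaking, once you have a depth map and a tracked fingertip, a floating “touch screen” reduces to a point-vs-plane test. A minimal sketch of that idea, with made-up coordinates and a hypothetical hand tracker supplying the fingertip position:

```python
import numpy as np

def touched(fingertip, plane_origin, plane_normal, threshold_mm=10.0):
    """True when the fingertip lies within threshold_mm of the virtual screen's plane."""
    distance = np.dot(np.asarray(fingertip, dtype=float) - plane_origin, plane_normal)
    return abs(distance) < threshold_mm

# Hypothetical setup: a virtual screen floating 400 mm in front of the user,
# in a camera-centered frame, facing back toward the eye (normal along -z).
plane_origin = np.array([0.0, 0.0, 400.0])
plane_normal = np.array([0.0, 0.0, -1.0])  # unit vector

# In a real system this would come from the depth sensor / hand tracker.
fingertip = (12.0, -30.0, 396.0)
if touched(fingertip, plane_origin, plane_normal):
    print("tap registered")
```

The hard engineering is in the tracking itself, of course; the interaction logic on top of it really is this simple.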
If it requires you to wear something stupid looking on your face when you’re out in public, it’s a nice toy, but no replacement for current touchscreen technology.
Wondering how long it will be before people start conspiracy theories about how the light being shined into our eyes “isn’t natural” and how our eyes were never meant to experience light directly.
Well, it didn’t take too long for us to get used to holding our faces up to stupid looking things while we’re out in public. We’re really good at adjusting our opinion of what looks stupid.
I’m wondering how they subtract incoming natural light to obscure the things behind the virtual images.
That part sets off my BS detector.
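The skepticism has a physical basis: a see-through display can only add photons to whatever light already reaches the eye, so without some extra blocking element in the optics, a virtual object can never appear darker than the real scene behind it. A toy illustration of the constraint (values made up):

```python
import numpy as np

# Radiance already reaching the eye from the real world (arbitrary 0..1 scale).
real_scene = np.array([0.8, 0.5, 0.1])

# A "black" virtual object: the display emits nothing for it.
virtual = np.array([0.0, 0.0, 0.0])

# A see-through additive display can only add light on top of the scene:
perceived = np.clip(real_scene + virtual, 0.0, 1.0)

# perceived == real_scene, so the black object is invisible; the display
# can brighten pixels but never darken what is behind them.
print(perceived)  # [0.8 0.5 0.1]
```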
It worked out fine in Star Trek.
Yelling at a tiny elephant only you can see, whilst gesticulating wildly, is gonna change people’s behaviour in public spaces a whole lot more than asking Siri questions while you’re on the bus.
“What? No, I’m not fucking crazy! I’m just trying to get this tiny elephant off my hand so I can send a text! Look, I shake and shake and he’s just stuck there like a damn barnacle! Why would I be the crazy one??? Clearly it’s this tiny fucking elephant who’s crazy!”
Maybe my IT security classes are starting to come back to me, because what strikes me are the potential risks of a device that shines light directly into your eyes and is controlled by a networked mobile computing device.
Glyph. Same product, less hype. And backing (at this stage).
Been waiting about 20 years for a non-monochromatic version of these to hit the market.