The Headset Revolution will be a blizzard of conflicting realities—if it happens, that is


Empathy? The first adopters will be YouTube trolls.

2 Likes

I assumed gamers.

2 Likes

I’d like to be optimistic about the ability to immerse people in the experiences of others, but we’ve already made progress in this area, and it continues to be manipulated and to fall on deaf ears and blind eyes. How many people currently invest an hour on a regular basis in Frontline or Expose? Potential is so different from reality. How many ignorant people are voluntarily seeking out the myriad currently available means of shedding their ignorance of their fellow humans?

The other concerning thing about technologies that promise to give viewers a realer, more accurate view of things is that the superficial feeling of closeness and realness these technologies offer has a halo effect of truth. But being “really there!” is not the same thing as knowing what’s really happening, and any illusion that you know what’s really happening opens you up to manipulation. It’s like close-up magic, where the prestidigitator lets you come close in order to give you the illusion that there’s no way he could pull anything on you. And of course, once one suspects they might be manipulated, they are also quite likely to be hardened to the truth when it does appear. Anything can be dismissed as manipulation or conspiracy.

It is also a big idea in the social sciences that proximity is not the same as true connection, and that the two, though seemingly similar experiences, actually have opposite effects. Ethnic groups that live in proximity to one another but have no meaningful contact and interaction are often more apt to resent one another, while communities with meaningful contact tend to empathize more often (I know, [citation needed]…). These devices provide proximity, but not connection, not meaningful understanding, not context. The technology is great, but just as the first telegraph didn’t prevent any of the suffering of the last 200 years, VR helmets alone won’t necessarily do so either.

2 Likes

I expect there is some overlap between these sets.

2 Likes

What about the critical theory that works of fiction are proxies for teaching morality and ethics? Watching movies enculturates people with values like love, truth, friendship, and so on (while at the same time being a reflection of culture).

The Rift has the potential to turn this up to eleven, which is in equal measures exciting and terrifying. It’s the best and the worst things about books, movies, games, and technology all smashed into your face.

By the way, the bit about the Rift feeling like drugs is kind of true. It is intense. The library I work at picked up a DK2 to demo at our branch and while it is universally praised, it still doesn’t seem to be desired for “casual” use. That is to say, people aren’t asking to sit down and just “play a game” for an hour; they want to do very specific things like go on a spacewalk, ride a roller coaster, and so on. As the mode of interaction gets refined, the potential is going to skyrocket.

INTERNET - 2021:

2 Likes

I played around with VR in the '90s and it had some fatal flaws that prevented it from setting the world on fire back then (such as making people sick), even though it was “the next big thing.” It seems to have most of the same problems now, so I’m somewhat skeptical that things are going to be that much different today.

A big question is the compatibility between the different devices. Will the same software work with Oculus Rift, that Sony thing, and the CastAR too? Or will one have to buy different hardware for different software?

The article says the motion sickness isn’t something people talk about, but it really is, and lots of progress has been made on it. The big barriers in the 90s were part hardware (hardware that could keep up just didn’t exist yet, and even now we’re only on the edge: a good number of computers still can’t render VR well, like the many laptops with frequencies throttled by the limitations of onboard graphics, and that’s a primary cause of making people sick) and part cost (the average person could not hope to afford a 90s VR system).

For example, my mom can’t even watch regular FPS games on a regular TV without getting sick, and even some movies move her into the vomit zone. Yet she was able to handle half an hour of the DK1 before the effects forced her to turn it off. The newer version’s hardware is even better at avoiding nausea, assuming you’re using software that supports it.

Right now, assuming you have the right hardware, making people sick primarily seems to be a software problem, and there are a number of dos and don’ts quickly building up there, though standards take time to be accepted.

On the other hand, I think a lot of its potential, esp. in regard to static 3d content, is way overplayed.

1 Like

0:49 it’s the Lament Configuration!

I keep hearing about (mostly mild) nausea from people, fairly consistently, even when they’re trying out the gear in the offices of the VR developers, but perhaps this time it’s been overblown. The content issues do still seem to be there, and not just for static content. I’m not sure why anyone would want to use it for static content - people don’t really want to use 3D TV for that kind of stuff, and VR is doubly unnecessary. Even for games and interactive experiences, however, there are issues. VR is cumbersome, doesn’t work well in the living room (i.e. for console gamers), requires a whole new set of input devices (and the lack of a standardized input scheme becomes problematic), it doesn’t actually suit that many games, and non-game interactive experiences don’t seem sufficient to get people into the tech.
I’m still excited about it, as I was in the '90s (albeit somewhat more cautiously now), but I’m not sure it has the mass appeal that people think it does.

The nausea, from my understanding, is very much an artifact of certain demos and hardware configurations. It seems to reliably occur in software where there is lag (meaning the movements of the head and the stuff on screen don’t sync up) or in demos that change the view without the player’s involvement (though if it also turns a cage the viewer is within, giving them a solid reference frame, like in flight sims, this is fine to a certain extent). For the (admittedly rather few) examples of demos and software that get this right, and on a decent computer, the nausea problem seems to be about on par with traditional shooters or shaky cam as far as ability to cause nausea, if not better. The software is improving every day and “acceptable hardware” is expanding as well, since that’s one of the Oculus team’s primary goals at the moment. There will probably always be people who can’t stomach it, just like there are people who avoid 3d video games and movies with shaky cam even now.
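
To make those two guidelines concrete, here’s a minimal sketch of what “getting it right” means in code. Plain Python, not tied to any real SDK, single-axis yaw only so composing rotations is just adding angles (a real engine would use quaternions); all names are made up for illustration.

```python
# Sketch of the two comfort rules above; illustrative names, no real SDK calls.

def view_yaw(head_yaw: float, vehicle_yaw: float) -> float:
    """The rendered view = vehicle motion composed with the *raw* head pose.

    The head reading is applied unmodified every frame; smoothing or delaying
    it is exactly the lag that desyncs head movement from the display.
    """
    return vehicle_yaw + head_yaw

def cockpit_yaw(vehicle_yaw: float) -> float:
    """The cockpit inherits only the vehicle's motion, so it stays put
    relative to the viewer and acts as the stable reference frame."""
    return vehicle_yaw

if __name__ == "__main__":
    head = 15.0      # player turns their head slightly right
    vehicle = 90.0   # the sim turns the aircraft, not the player
    print("view:", view_yaw(head, vehicle), "cockpit:", cockpit_yaw(vehicle))
```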

At least from my experience with the Rift, it’s not really cumbersome at all? And it works fine in my living room. I’m not sure what you mean by cumbersome, though? The viewing device is also the only new input device you need; although, like flight sims and their joysticks, some games and experiences will benefit from dedicated peripherals, those are hardly necessary.

I think the stuff will definitely have significant appeal to a large subset of gamers and sim-lovers, and will probably end up playing central roles in a variety of learning and scientific fields, as well as aiding in psychological treatments (where it’s already used in a cruder form) and military simulations. I don’t think it will be as immediately revolutionary as people think, but I suspect it will have a significantly larger niche than flight sticks and racing wheels, both of which are pretty stable markets, and it will improve and expand at a significantly more rapid rate. The support from many major players in the game industry (Valve, for example) is something it didn’t have last time around, nor did it have the hardware and software capabilities we have now, and the affordability (it costs less than a new console) isn’t something to ignore.

Certain non-interactive 3d content might tangentially benefit from people having access to something that can display it for other reasons, though. Especially certain types of video where immersion is an important part of the experience.

1 Like

There are important synergies with e.g. 3D printing, and CAD/CAM in general. You can see the parts as almost-real, optionally fit them together and simulate, bringing other parts of the setup in as 3d-scanned objects. Then there are the molecular models, visualisations of crystal structures, and viewers for 3d microphotographs, both optical and scanning-electron-microscope. And then there is virtual tourism, if the Street View and gigapixel-panorama integration gets done well.

Edit: Another fun thing could be a walkthrough through metal structures. Take a piece of metal, polish, etch, scan. Polish away a few more micrometers, etch, scan. Repeat (preferably by a robot) until a significant layer is polished away, much as is done (though with thicker layers, and possibly at lower resolution) when reverse-engineering (and fault-analyzing) integrated circuits. Stack the scans/photos into a 3d structure. I reckon it would give much more “feel” for metal (and material in general) structures than conventional 2d microphotos. Now view it in VR for the best experience. (Possibly too time-consuming/expensive for investigatory use, but it would certainly be great for teaching material engineers.)
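
For the stacking step itself, a rough sketch might look like the snippet below, assuming the layers have already been scanned, registered, and loaded as equally sized grayscale arrays; the layer thickness and sizes are made up for illustration.

```python
# Rough sketch of stacking registered serial-section scans into a voxel volume.
# Assumes numpy is available; layer thickness and sizes are illustrative only.
import numpy as np

def build_volume(layers, layer_thickness_um=2.0):
    """Stack registered 2D scans into a (z, y, x) volume.

    layer_thickness_um is how much material was polished away between scans;
    it sets the voxel spacing along z so a VR/volume viewer can show the
    structure at the right aspect ratio.
    """
    return np.stack(layers, axis=0), layer_thickness_um

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_scans = [rng.random((512, 512)) for _ in range(50)]  # stand-in data
    volume, dz = build_volume(fake_scans)
    print(volume.shape, "z spacing:", dz, "µm")
```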

The end software will be different, but the development software will be similar. For example, right now Unity is being used for 90% of the Oculus demos, and Unity also outputs to PlayStation 4. There will likely have to be separate SDKs - there is one for the Oculus Rift, and I imagine there will eventually be one for Project Morpheus, Sony’s VR helmet - and some tweaks will have to be done, but they will both share the same code & visuals.
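
In practice that sharing tends to happen behind a thin abstraction layer, something roughly like the sketch below: the game code targets one small interface and each headset gets its own backend. Plain Python with made-up class and method names, not real SDK calls.

```python
# Hedged sketch of per-headset backends behind a shared interface.
# Class/method names are invented for illustration, not real SDK calls.
from abc import ABC, abstractmethod

class Headset(ABC):
    @abstractmethod
    def head_pose(self) -> tuple:
        """Return the tracked head orientation, e.g. (yaw, pitch, roll)."""

    @abstractmethod
    def submit_frame(self, left_eye, right_eye) -> None:
        """Hand the two rendered eye images to the vendor SDK for display."""

class RiftBackend(Headset):
    def head_pose(self):
        return (0.0, 0.0, 0.0)      # would call into the Oculus SDK here
    def submit_frame(self, left_eye, right_eye):
        pass                        # Oculus-specific distortion/present call

class MorpheusBackend(Headset):
    def head_pose(self):
        return (0.0, 0.0, 0.0)      # would call into Sony's SDK here
    def submit_frame(self, left_eye, right_eye):
        pass                        # Sony-specific present call

def render_frame(hmd: Headset):
    """Shared game code: identical logic and assets, whichever backend runs."""
    yaw, pitch, roll = hmd.head_pose()
    eye_image = f"scene at yaw={yaw}"          # stand-in for real rendering
    hmd.submit_frame(eye_image, eye_image)

if __name__ == "__main__":
    render_frame(RiftBackend())
    render_frame(MorpheusBackend())
```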

Still, you won’t be able to mix and match the end software any more than you would with Android and iOS, or say Xbox One and PlayStation 4.

That’s pretty much what I was afraid of. I hoped there’d be a chance to mix and match in the style of mice/monitors that can be used with any desktop. Guess there are no standards for VR yet, so no interoperability. Expected that, hoped I was wrong.

Thought… could there be some device-emulation hardware that would e.g. take the HDMI signal and scale/transform/deform it to adjust it for another device? An FPGA board with a frame buffer, and some protocol transcoder for the sensors? Some sort of a compatibility layer?
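
The video half of such a box boils down to a per-pixel remap. Here is a very rough software mock-up of the idea; the resolutions and the plain rescale are purely illustrative, and a real lookup table would also have to bake in the target headset’s lens-distortion profile.

```python
# Software mock-up of a frame-buffer transcoder: for every output pixel, a
# precomputed lookup table says which input pixel to fetch, so any scale/shift
# (or lens warp) between two headsets becomes one table lookup per pixel.
# All numbers below are invented for illustration.

def build_lut(src_w, src_h, dst_w, dst_h):
    """Precompute, for each destination pixel, the source pixel to sample.
    Here it is a plain rescale; a real table would encode the full warp."""
    return [[(x * src_w // dst_w, y * src_h // dst_h) for x in range(dst_w)]
            for y in range(dst_h)]

def transcode_frame(frame, lut):
    """Apply the lookup table to one frame (frame[y][x] -> pixel value)."""
    return [[frame[sy][sx] for (sx, sy) in row] for row in lut]

if __name__ == "__main__":
    src = [[(x + y) % 256 for x in range(1280)] for y in range(800)]  # source-ish
    lut = build_lut(1280, 800, 960, 1080)                             # target-ish
    out = transcode_frame(src, lut)
    print(len(out), "x", len(out[0]))
```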

It’s a bit of a shame that the author appeared to only test the original DK1 (Development Kit 1) and not the most recently released DK2 as the comfort (the term of art Oculus uses for describing nausea/sim sickness) is much improved.

This is due to a number of improvements:

  • DK2 supports 6DOF positional tracking while the old DK1 only had 3DOF rotational tracking - a sure way to make yourself sick in DK1 is to look down and move your head around - the world will move with it. Alternatively, try leaning in and out. DK2 (sub-millimeter positional tracking) basically fixes these issues
  • DK2 introduces a “low persistence” display that eliminates motion blur and judder. It turns out that this is a much more important feature than many people had assumed in increasing not just comfort, but visual fidelity (being able to read text in particular)
  • Refresh rate has been increased from 60Hz to 75Hz. The target for the CV1 (Consumer Version 1) is 90Hz+. This reduces flickering artifacts and also the motion-to-photon latency (rough frame-time arithmetic below). The goal for “imperceptible” lag is to reduce latency to <20ms (DK1 is about 50ms+ with lots of issues depending on rendering path/engine). As a point of comparison most research/products from the 90s had latency in the hundreds of milliseconds…
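
To put those refresh rates next to the <20ms target, here is the trivial arithmetic: just the frame period at each rate, with whatever is left over as the budget for tracking, rendering, and scan-out (that split is my framing, not an Oculus figure).

```python
# Frame period at each refresh rate versus a ~20 ms motion-to-photon target.
# Pure arithmetic on the figures quoted above; nothing here is measured data.
for name, hz in [("DK1", 60), ("DK2", 75), ("CV1 target", 90)]:
    frame_ms = 1000.0 / hz
    print(f"{name}: {hz} Hz -> {frame_ms:.1f} ms per frame, "
          f"{20.0 - frame_ms:.1f} ms left for tracking/render/scan-out")
```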

There are definitely tons of unsolved issues with regards to comfort (some pretty hard problems w/ how the visual/vestibular/proprioceptive systems interact/conflict in VR), but these neuro-physiological concerns are in some ways more … orthogonal? inapplicable? separate? from the content/sociological/psychological concerns than the article implies.

In any case, many of the former issues brought up should be largely obviated by the time the public at large has a chance to use consumer VR. Certainly if it’s going to have the type of popular adoption/impact that VR proponents expect/would like.

2 Likes

Hey Leonard! Long time no see! Agreed – it’s a bummer that Jason didn’t get to try out our DK2, but as I noted in the beginning of the podcast, we just got ours at the lab and I’m still trying to get it properly configured. The DK1 worked perfectly with my 2011 Macbook Air, but the DK2 may be too much for the little guy to handle. I told Jason he should come back in a few weeks once I get the kinks ironed out (or a new machine)!