My understanding, with the caveat that I haven’t used any of them yet, is that headsets like the Oculus Rift can use stereoscopic content, creating the illusion of depth by presenting each eye with a different view, the two views shot by two slightly separated cameras.
In truth, this is also how your own eyes create the illusion of depth. The actual images produced by light hitting your retinas are two-dimensional, but because your eyes are slightly separated, each sees the scene from a slightly different vantage, and your brain interprets those small differences (combined with experience with shadows) as depth. This is also why you have better depth perception up close: with your eye separation fixed, the angle between your two lines of sight to an object is greater for nearer objects. When you close one eye, you still have some depth perception, because your brain is still making guesses, but it’s slightly impaired because it’s doing so from less data. It’s also why movement helps depth perception: your brain can track how the two-dimensional shapes projected by light onto your retinas shift, enabling guesses about the three-dimensional shapes the light bounced off.
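To put a rough number on the “better up close” part: treating the eyes as two points a fixed distance apart, the angle between the two lines of sight to an object straight ahead is about 2·atan((separation/2)/distance). A quick back-of-the-envelope sketch, assuming a typical adult eye separation of about 6.5 cm (an average, not a measured value):

```python
import math

EYE_SEPARATION = 0.065  # meters; ~6.5 cm is a typical adult average (assumed)

def vergence_angle_deg(distance_m):
    """Angle between the two eyes' lines of sight to a point straight ahead.

    A bigger angle means a bigger difference between the two retinal images,
    i.e. more depth information for the brain to work with.
    """
    return math.degrees(2 * math.atan((EYE_SEPARATION / 2) / distance_m))

for d in [0.3, 1, 3, 10, 100]:
    print(f"{d:>6} m -> {vergence_angle_deg(d):.3f} degrees")
# -> about 12.4, 3.72, 1.24, 0.372, and 0.037 degrees respectively
```

The angle falls off roughly as 1/distance, which matches everyday experience: stereo depth cues are strong within arm’s reach and nearly useless at a hundred meters.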
When you go to a 3D movie, the frames alternate between images shot simultaneously on two slightly separated camera lenses (though the lenses might be in the same housing, just as your eyes are in one head). The glasses you wear have a polarized coating, and the alternating frames are projected in alternating polarizations, so that even frames can only pass through one lens and odd frames through the other, recreating for your eyes the separation between the camera lenses.

I assume these VR headsets exploit the same principle, which is over a hundred years old, to present a 3D image. I also assume they use the same kind of gyroscopic motion sensors smartphones use to track head movement, so you can look around. It stands to reason that the filmmakers either use multiple stereoscopic cameras to shoot multiple angles, which software automatically (or an editor manually) stitches into a 180° or 360° panorama, or use something like a stereoscopic version of an IMAX camera, whose special lens captures light in a wide arc and whose software then corrects the resulting contraction (actually a pretty simple calculation in optics, though it requires expensive high-quality lenses and an extremely high-resolution image sensor to maintain clarity). A few rough sketches of these mechanisms follow below.
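First, the polarized-glasses gating. For the simplest (linear-polarization) case the math is just Malus’s law; note that modern theaters typically use circular polarization instead, so the effect survives when you tilt your head, but the gating idea is the same:

```python
import math

# Malus's law: light that passed one linear polarizer, hitting a second one,
# comes through with intensity I = I0 * cos^2(angle between the two axes).
# This is the gating mechanism in the simplest (linear-polarized) 3D setup.

def transmitted_fraction(angle_between_axes_deg):
    return math.cos(math.radians(angle_between_axes_deg)) ** 2

print(transmitted_fraction(0))   # aligned filter: 1.0 (your eye's frames pass)
print(transmitted_fraction(90))  # crossed filter: ~0.0 (the other eye's frames are blocked)
```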
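On head tracking: the core of dead-reckoning orientation from a gyroscope is just integrating angular velocity over time. Real headsets fuse this with accelerometer (and sometimes magnetometer or camera) data to cancel drift; the function below, with its naive Euler integration and made-up axis convention, is my own illustration rather than anyone’s actual implementation:

```python
import math

def integrate_gyro(orientation, angular_velocity, dt):
    """Update (yaw, pitch, roll) in radians from a gyroscope's rate readings.

    angular_velocity: (wx, wy, wz) in rad/s from the gyroscope
    dt: time since the last sample, in seconds
    Naive Euler integration: fine as a sketch, drifts in practice.
    """
    yaw, pitch, roll = orientation
    wx, wy, wz = angular_velocity
    return (yaw + wz * dt, pitch + wx * dt, roll + wy * dt)

# Simulate turning your head at a steady 90 degrees per second for one
# second, sampled at 100 Hz:
orientation = (0.0, 0.0, 0.0)
for _ in range(100):
    orientation = integrate_gyro(orientation, (0.0, 0.0, math.radians(90)), 0.01)
print(round(math.degrees(orientation[0]), 1))  # -> 90.0 degrees of yaw
```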
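And the “contraction” correction for a wide-arc lens really is simple, if you assume an idealized model. In an equidistant fisheye, distance from the image center is proportional to the angle of the incoming ray (r = f·θ), while a flat, rectilinear view needs r = f·tan(θ), so the correction is just a radial remap. The equidistant model and the focal length here are assumptions for illustration, not a claim about what IMAX or any particular camera actually uses:

```python
import math

def fisheye_to_rectilinear(x, y, focal_px):
    """Map fisheye pixel offsets (from image center) to rectilinear offsets."""
    r = math.hypot(x, y)
    if r == 0:
        return (0.0, 0.0)
    theta = r / focal_px                # equidistant fisheye: radius encodes angle
    r_out = focal_px * math.tan(theta)  # rectilinear: radius encodes tan(angle)
    scale = r_out / r
    return (x * scale, y * scale)

# A point 60 degrees off-axis: the fisheye records it at r = f * (pi/3),
# but a flat projection needs it at r = f * tan(60 deg) ~ 1.73 f.
f = 1000.0  # focal length in pixels (assumed)
print(fisheye_to_rectilinear(f * math.pi / 3, 0.0, f))  # -> (~1732.1, 0.0)
```

Note how the 60°-off-axis point moves from about 1047 pixels out to about 1732: pixels near the edge get stretched, which is exactly why an extremely high-resolution sensor is needed to keep the corrected image sharp.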
As usual, I find the science behind the technology a lot more interesting than actually using the technology myself, which tends to be either boring or obnoxious (as in the case of 3D movies, which I avoid). Still, kudos to this Kickstarter for trying new things; it’s the only way to find out what ultimately works where the rubber meets the road.