What’s an example of this? What game screengrab did you post above?
Are you talking about fixed point clouds that look sparse up close? Or ray marching through voxel density and making solids out of very dense fluid rendering? Or sampling point clouds at voxel edges and processing them into polygons, sort of the way ZBrush DynaMesh works or the way “blobbies” get surfaced?
Game devs talking about the wild hacks they had to use to make their game do something seemingly straightforward is one of my favorite things. Whether it’s “I cheesed the sprite engine with huge amounts of math just to make geometry fading work” or “we didn’t have time to build a rail car controller, so we glued the train to your arm and recycled the player controller to control the animation”, the stuff that goes on behind the scenes to get a game out the door is bonkers.
The screengrab is from the 1992 flight sim “Comanche.” It always stuck with me because of the nice organic voxel landscapes, which were a real departure from the barren landscapes of other games. The later “Outcast” had a similar (pseudo-) voxel landscape.
With modern engines, there’s this weird convergence.
Things like Voxel Farm (used for EverQuest Landmark) use UV-textured voxel data for environments but turn everything into polygons for interactions (e.g. if a chunk of the landscape is detached, it’s turned into polygons) and for rendering (converting surface data into texture maps). So it works for creating voxel data sets for Unity, Unreal, etc., but you can also alter everything in realtime, which doesn’t work so well if it’s all stored as polygonal data.
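Voxel Farm’s actual surfacing is proprietary, but the basic idea of turning voxel data into polygons can be sketched naively: walk the solid voxels and emit a face wherever one borders empty space. This is purely illustrative (not Voxel Farm’s API), and real meshers use smarter schemes like greedy meshing or dual contouring:

```python
def mesh_faces(solid):
    """Naive surface meshing of a voxel set.

    solid: a set of (x, y, z) integer voxel coordinates.
    Emits one polygon face wherever a solid voxel borders an
    empty cell, as (cell, outward_normal) pairs.
    """
    neighbors = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                 (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    faces = []
    for (x, y, z) in solid:
        for dx, dy, dz in neighbors:
            # A face only exists where solid meets empty space.
            if (x + dx, y + dy, z + dz) not in solid:
                faces.append(((x, y, z), (dx, dy, dz)))
    return faces
```

A lone voxel produces six faces; two adjacent voxels share an interior boundary and produce ten, which is exactly why interior geometry stays cheap when you only polygonize the surface.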
Then you have modern game engines “voxelizing” non-voxel world data to figure out occlusion and global illumination (e.g. Witcher 3, Destiny, CryEngine).
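Those engines’ voxelization passes are far more sophisticated (conservative rasterization on the GPU, etc.), but the core idea of bucketing polygon world data into a coarse grid can be sketched with a crude point-sampled triangle voxelizer. Everything here is an illustrative assumption, not any engine’s actual method:

```python
def voxelize_triangle(a, b, c, cell=1.0, samples=8):
    """Crude triangle voxelization by barycentric point sampling.

    a, b, c: (x, y, z) triangle corners.
    Scatters points across the triangle and buckets each one
    into the grid cell (of size `cell`) that contains it.
    """
    cells = set()
    n = samples
    for i in range(n + 1):
        for j in range(n + 1 - i):
            u, v = i / n, j / n
            w = 1.0 - u - v
            # Barycentric combination of the three corners.
            p = tuple(u * a[k] + v * b[k] + w * c[k] for k in range(3))
            cells.add(tuple(int(p[k] // cell) for k in range(3)))
    return cells
```

The resulting cell set is the kind of coarse occupancy data a GI or occlusion system can cheaply trace against, even though the renderer never draws it.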
I think in large part, it was because polygons were more efficient for the sorts of things game developers were doing at the time (especially given hardware limits). But now, not so much - it seems like parallel processing and large amounts of memory allow voxels to come into their own.
Not necessarily. With something like Voxel Farm, which renders as polygons, you can see that the “voxels” can be more than just cubes - because each point has all sorts of data associated with it, voxels can have complex, curved shapes:
But even with pure voxel rendering, the voxels can be damn small, measured in millimeters, e.g.:
It doesn’t end up looking too dissimilar to high-resolution polygons, where the resolution of the images used for texturing and bump-mapping can make things grainy up close.
This is voxels, but you can’t really tell:
And then there are interpolation tricks that can make surfaces smoother still, beyond the resolution of the voxel grid.
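One of the simplest such tricks is trilinear interpolation: sample the density field between voxel centers by blending the eight surrounding values, so surfaces stop looking like stacked cubes. A minimal sketch, assuming the field is a plain nested-list grid indexed as `grid[x][y][z]`:

```python
def trilinear_sample(grid, x, y, z):
    """Sample a 3D density grid at a fractional position by
    blending the 8 surrounding voxel values (trilinear filtering)."""
    x0, y0, z0 = int(x), int(y), int(z)
    x1, y1, z1 = x0 + 1, y0 + 1, z0 + 1
    fx, fy, fz = x - x0, y - y0, z - z0

    def lerp(a, b, t):
        return a * (1 - t) + b * t

    # Blend along x first, then y, then z.
    c00 = lerp(grid[x0][y0][z0], grid[x1][y0][z0], fx)
    c10 = lerp(grid[x0][y1][z0], grid[x1][y1][z0], fx)
    c01 = lerp(grid[x0][y0][z1], grid[x1][y0][z1], fx)
    c11 = lerp(grid[x0][y1][z1], grid[x1][y1][z1], fx)
    return lerp(lerp(c00, c10, fy), lerp(c01, c11, fy), fz)
```

A ray marcher that thresholds this smoothed density (instead of raw cell values) reconstructs a surface at sub-voxel precision, which is what makes millimeter-scale voxels read as continuous.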
So this is making voxels quite attractive (even though the entire workflow for game development is based around polygons), because there are things you can do with voxels that are extremely difficult or functionally impossible to do with polygons, e.g.:
It’s worth remembering the one big weak point of voxels, though: animation. This is the biggest reason why you only really see voxels used for terrain and static stuff now, and why back in the ’90s you only saw them used for things like item drops (for example, in FPS games like Blood and Shadow Warrior).

With polygons you can store an animation really simply: for every vertex (corner of a triangle) in the model, the next frame of the animation is just the new position of that vertex. If you have a new set of vertices and, say, an amount of time you want the animation to take, you can smoothly go from one vertex position to the next using interpolation. There are a LOT fewer vertices than there are pixels, so this works really well.

In an ideal voxel game, you want at least one voxel per pixel (or more, because of sampling techniques), so for each “frame” of animation you need to derive or store the location of every voxel. I do wonder how a skeletal animation system could work with a voxel system, though… I assume not amazingly, considering how nobody (afaik) does it, but still.
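The vertex-interpolation scheme described above is tiny in code terms. A minimal sketch, assuming keyframes are stored as plain lists of (x, y, z) vertex positions in matching order:

```python
def lerp_frame(frame_a, frame_b, t):
    """Blend two mesh animation keyframes.

    frame_a, frame_b: lists of (x, y, z) vertex positions,
    same length and same vertex order.
    t: blend factor in [0, 1]; 0 gives frame_a, 1 gives frame_b.
    """
    return [
        (ax + (bx - ax) * t, ay + (by - ay) * t, az + (bz - az) * t)
        for (ax, ay, az), (bx, by, bz) in zip(frame_a, frame_b)
    ]
```

A few thousand vertices per character makes this trivially cheap; doing the equivalent for millions of per-frame voxel positions is exactly the storage and bandwidth problem described above.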
That sounds suspiciously like a foray into finite element analysis (especially if you have skeletal animation systems with at least some physics simulation, rather than pure motion capture, in mind). Which is definitely a thing that people do, sometimes with excellent results; but not so much a thing people do in real time or on cheap hardware.
Not necessarily; I was thinking more just in terms of a storage perspective: this voxel is always at position x relative to its “bone voxel”, which greatly reduces storage. Assuming a true voxel system (everything fits exactly in the global voxel grid), I don’t think the results would look too bad.
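A minimal sketch of that storage idea, assuming each voxel is stored as an integer offset from its bone and snapped back onto the global grid after the bone moves. This is a hypothetical scheme, not something any shipping engine is known to do this way; only a single yaw rotation is shown to keep it short:

```python
import math

def pose_voxels(offsets, bone_pos, bone_yaw):
    """Place bone-relative voxels into the global grid.

    offsets: list of (x, y, z) integer offsets from the bone's origin
             (this is all that needs to be stored per voxel).
    bone_pos: bone origin in grid coordinates for this frame.
    bone_yaw: bone rotation around the y axis, in radians.
    Rotated positions are rounded back to integers so the result
    stays on the global voxel grid (a "true voxel system").
    """
    c, s = math.cos(bone_yaw), math.sin(bone_yaw)
    placed = set()
    for x, y, z in offsets:
        wx = round(bone_pos[0] + x * c - z * s)
        wy = round(bone_pos[1] + y)
        wz = round(bone_pos[2] + x * s + z * c)
        placed.add((wx, wy, wz))
    return placed
```

Only the per-bone transform changes per frame, so storage stays constant regardless of frame count; the rounding step is also where the visible artifacts of this approach (voxels snapping between cells mid-rotation) would come from.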
Yeah, voxels are ideal for static geometry, but - somehow - Atomontage has vehicle and character animations in their engine.
I’ve seen supposed skeletal animation demos of voxel models from others. I don’t know how they do it - I suspect they may be converting to polygons at various stages of the process, though.