Same guy did this video, too. Pretty amazing stuff.
It sounds like these were rendered in realtime. From his forum posts:
On my i7-3770 / GTX 970, in the editor (1080p), I get around 30fps for the forest and mecha scenes and around 60fps for the snow scene.
What is killing the performance the most is the sun shadows. On “epic” quality, it divides my framerate by two (this is why the snow scene runs at 60fps).
So I guess it’ll probably be a while before this is standard but it’s pretty awesome that modern high-end rigs can do this. It’ll be cool to see it with more dynamic set pieces!
Seriously impressive, although for the realistic Mario he should have made Mario look like a realistic Lou Albano. That would have been awesome.
I’d love it if Kerbal Space Program were ported to Unreal Engine 4, but Squad’s totally committed to Unity.
Lighting and textures are more and more realistic, but a lot of those spaces seemed very awkward for Mario to navigate, required a fixed camera, etc. I get the sense we’ll continue to see the over-large, minimally-furnished, minimally-populated environments that dominate a lot of high-end games and create a kind of architectural uncanny valley.
I am sure this will help the stories in games now… or not. The story will be crap, but the reviews will say ‘The graphics are so good!’
Plus, it probably can’t run Arkham Knight at 1080p.
It is being used for spiffy stuff. See also:
That music… Why did I keep expecting a half-naked Unreal Plumber to arrive?
So basically, my new computer is now obsolete. Thanks, guys.
Great! I’ll finally be able to afford a London loft!
Unreal. Nice to see some live editing and insta-results.
Though of course a Real Game will have gigantic muscled meatheads in armor firing immense hand-cannons at gibbering aliens or terrists and explosions everywhere while they channel you relentlessly down the theme ride path. But the rocks being sent flying will look really nice.
In 20 years this will look like Pac Man.
I remember the very first scene of The Fellowship of the Ring - the big battle between elves and orcs - looking very impressive. Looking at it these days it’s almost painfully unconvincing.
Are you sure you’re looking at Fellowship at its best? Overly aggressive compression can make things look less realistic than the designers intended.
Remember though, these scenes look incredibly realistic because there’s nothing of real complexity in them. Simple scenes render really well, but add an object more complex than a rock, or something animated (say, a person), and it won’t look all that different from existing games, except that some of the rendering of diffuse lighting might be nicer. Even that might not hold, because once you’ve added some real complexity to the scene, the engine has to deal with rendering all of that, too, so…
It’s the basic problem with graphics demos - they show something really simple that will render far more impressively than an actual game will, with all the demands that it has. Given the end of Moore’s Law and the negligible benefit of doubling polygons or texture sizes at this point, we’re not likely to see huge improvements in graphics without a pretty extraordinary technical breakthrough.
It’s impressive, and there’s lots that could be done with technology like this, but by making the direct comparison to Super Mario 64 they’re highlighting how this hyperreal approach is maybe not a good thing for games.
The more complex 3D environments get, the more it highlights how they’re just big glossy slabs of non-interactive pixels (that cost millions). Triple-A titles lean so much on how good they look in videos, and so little on innovative and well-crafted gameplay, that they’re gradually devolving into Dragon’s Lair. The things that made SM64 actually fun are now the exclusive domain of indie games, which often don’t use 3D at all.
It’s not the polygonal complexity that’s been holding back real-time rendering; we’ve been able to push many millions of polygons on-screen for generations of hardware now. It’s the ambient lighting.
This is the light reflected around the environment, the light that illuminates us indoors, for instance, even when no direct light falls on us. So far this has been faked with baked textures and extra hand-placed ambient lights. True ambient lighting also adds its own subtle shadows and softens the shadows from direct lighting. A single terracotta pot in a room will subtly change the colours around it, for example. It’s these massive amounts of lighting calculations that’ve kept us in the Uncanny Valley.
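To make the cost concrete, here’s a toy sketch (not any engine’s actual code; all geometry, colours, and sample counts are made-up illustration values) of why one bounce of indirect light is so much more expensive than direct light, and how that terracotta pot tints a nearby wall:

```python
# Direct light is one calculation per surface point per light. One bounce of
# indirect ("ambient") light means gathering light from many OTHER surface
# points -- here, sampling a hemisphere of directions from a wall point.

import math
import random

def lambert(normal, to_light):
    """Cosine term: how much incoming light a diffuse surface catches."""
    d = sum(n * l for n, l in zip(normal, to_light))
    return max(d, 0.0)

wall_color = (1.0, 1.0, 1.0)         # white wall point, normal facing +x
direct = (0.0, 0.0, 0.0)             # no light shines on it directly: black

# One-bounce gather: a nearby terracotta pot IS directly lit, and some of
# that light bounces off the pot toward the wall.
pot_color = (0.9, 0.45, 0.3)         # terracotta
pot_direct_brightness = 0.8          # how strongly the sun hits the pot

random.seed(1)
samples = 64                         # real renderers need far more per point
gathered = [0.0, 0.0, 0.0]
for _ in range(samples):
    # Random direction in the hemisphere around the wall's normal (+x).
    d = (random.random(), random.uniform(-1, 1), random.uniform(-1, 1))
    norm = math.sqrt(sum(c * c for c in d))
    d = tuple(c / norm for c in d)
    cos_term = lambert((1.0, 0.0, 0.0), d)
    # Pretend ~1/4 of directions hit the pot; the rest hit black sky.
    if random.random() < 0.25:
        for i in range(3):
            gathered[i] += pot_color[i] * pot_direct_brightness * cos_term

indirect = tuple(wall_color[i] * g / samples for i, g in enumerate(gathered))
final = tuple(direct[i] + indirect[i] for i in range(3))

# The "black" wall ends up faintly terracotta-tinted: colour bleeding.
print(final)
```

And that’s just one bounce for one point with a handful of samples; multiply by every pixel, every frame, and it’s easy to see why this gets faked with baked lighting instead of computed live.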
Realism is a complete waste of time, money, effort, etc. We already have “real”, and I’d much rather people use computers to make something else.
Striving for realism is what allows us to create simulators with predictable outcomes.
This seems more likely to be a happy coincidence, with “predictable outcomes” being a vast superset of what people generally expect to see. It’s one way to get there, but it IMO results in less interesting art, and ignores the possibility of creative synthesis, as contrasted to simulation.
This is certainly true, but my point was more that once we add something of complexity on screen (a seagull, say), it becomes much more difficult to present it without tripping an instinct that recognizes it as fake, because the modeling, texturing, and animation have to be perfect (whereas a rock that is “off” just looks like another, different rock), and the modeling, texturing, and animation requirements for a realistic seagull (much less a human being) are orders of magnitude greater than those for a realistic rock. The improvements in rendering only work to increase “realism” so long as nothing in the scene breaks the illusion in some other way, and that’s not just a rendering issue, as the unconvincing CGI humans in movies can still attest.