Star Citizen's "terrifying face tracking" demonstrated


#1

Originally published at: https://boingboing.net/2018/10/15/star-citizens-terrifying-f.html


#2

I like that for all the crazy user interface design and industrial design that goes into the spacecraft, spacesuits and game menus, the shoe store looks like a shitty Staples that happens to sell shoes.


#3

#4

The greater the scope, the more obvious it becomes everything is made from the same Lego set.


#5

Are you sure that’s not face-fracking?


#6

Laugh it up, meatbags. Soon this will be the only reality you can access.


#7

Get the Monster Factory boys onto this immediately


#8

still better than Mass Effect Andromeda


#9

Star Citizen is the weirdest thing. It’s obviously never going to get finished, if finishing it is even possible. It’s the textbook example of feature creep, and I suspect it might all be a Ponzi scheme. A thousand dollars (or more) for a jpg of a spaceship? What the actual fuck? Yet people still buy it.

All I can do is shake my head in disbelief.


#10

I guess I’m not as eager to deride neat new technology as others. I think that face tracking is amazing and adds a massive amount of life to the characters. Sure, if you make a weird-looking guy and make weird faces at stuff, it looks weird, but uh, don’t do that if it upsets you?

I personally can’t wait to see this technology spread to more social MMOs. Star Citizen might be of dubious scope and size, but this little piece is great, and once it expands out into other social and role-playing games, it’s going to make immersion and playing with others so much more fun.


#11


#12

The face animation is pretty great, but right now the way it’s animating the eyes definitely gives the face a dead look. The eye tracking is decent, but it’s not picking up the subtle, quick eye movements (technically it doesn’t even need to track those; it could just infer some small movement on its own), and the eyelids also stay open the same way in most of the facial expressions, which makes it look uncanny. Still, this is really cool tech. It’d be a wonder to see this game in a finished state someday.


#13

Thumbs up for “yay-kits”. And for creeping out the Uncanny Valley-sensitive.


#14

I was promised “terrifying face tracking” but what I saw was “If you make a goofy face in real life, your avatar makes a goofy face in the game.”


#15

kinda wonder how much extra bandwidth it takes. not just streaming up voice and “large grain” movement like walking speed and direction - but all the details of all the faces around you. sync’d with what people are saying.

could totally see them having a tech prototype that doesn’t actually scale


#16

That was far better than I expected based on the headline. While he was able to mess with it, in general it looked not too bad. This is what Second Life should have been. :slight_smile:


#17

Would it have been so hard to let it blink? That constant, wild-eyed stare is 90% of the problem.


#18

The really tricky bit seems to be making that work for you rather than against you.

Having things behave uniformly can significantly improve verisimilitude: having things obey the same rules because they are the same things goes a long way toward breaking the impression that you are playing a connected series of minigames and setpieces, each rigged and stage managed in its own way. (of course, having things behave uniformly can also lead to people discovering that your attempt at realistic sight line modelling intersects with your lovingly detailed rubbish physics to make putting buckets on NPCs’ heads the last word in larceny; which is why games often are a connected series of rigged and stage managed minigames…)

It can also just be realistic: anyone playing video games probably lives in the aftermath of the industrial revolution, surrounded by cookie cutter recycled assets because economies of scale, interchangeable parts, and nonrecoverable engineering costs were all things well before video games were.

What I find tricky is figuring out why some examples fall so far on the ‘for you’ and some so far on the ‘against you’ side of the scale.

In this case, say, I’d agree that the shoe store’s ambiance leaves…a lot to be desired; but I can also see why shoe stores (and all stores) would look exactly like that in a context where organic materials other than extruded nutrient paste are an extravagant drain on the life support system, the last bulk hauler from the nearest asteroid belt’s massive nuclear thermal smelter delivered a million tons of 18 gauge aluminum sheet, and stamped sheet metal is now cheaper than cardboard and with better mechanical properties.

I don’t disagree with @HMSGoose on this one; I’m just not quite sure why this tacks over into ‘uninviting big-box office supply store’ rather than being a perfectly natural element of a space station that appears to be tacked together entirely out of white metallic slabs (and, while not pleasant, the rest of the station somehow feels unpleasant in a thematically cogent way, like how the low bidder’s space station would feel; the store sticks out).


#19

I’d be interested to know exactly how it is being done here, since that would obviously answer the ‘how much bandwidth?’ question pretty decisively; but it looks like the amount of data required isn’t necessarily that high.

Here, say, is the reference for face tracking in After Effects; and it’s a not terribly long list of small numbers tracking deformation relative to the face’s configuration in a chosen keyframe.

Even if we say you need 16 bits for each one (probably ludicrously high precision for most rendering scenarios) and 30 data points updated 30 times a second, that would be 14.4 kilobits per second. Back when the SupraFAXModem 14400 was pretty hot stuff, that might have been more of a problem.
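Those numbers are easy to sanity-check. A quick back-of-the-envelope sketch (the 30 points, 16 bits, and 30 Hz figures are assumptions from this thread, not anything from an official Star Citizen or After Effects spec):

```python
# Rough per-player bandwidth for streaming face-tracking data.
# Assumed: 30 tracked deformation values, 16 bits each, sent 30 times/sec.
bits_per_value = 16
values_per_update = 30
updates_per_second = 30

bits_per_second = bits_per_value * values_per_update * updates_per_second
kilobits_per_second = bits_per_second / 1000

print(kilobits_per_second)  # 14.4 -- one 14.4k modem's worth, per visible face
```

Even multiplied by a dozen nearby faces, that’s well under half a megabit per second before any compression, so the raw data rate probably isn’t the scaling problem — keeping it synced with voice and animation might be.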


#20

What happens if you point the camera at the character on the screen?