I can imagine that something like this might be possible, but I have an overwhelming feeling that these guys haven’t achieved it. In the process of developing this application, they would also have to conjure from thin air a solidly reliable system for classifying personalities. That would be Nobel Prize kind of stuff.
Well that’s a bit of a silly statement, don’t you think? Statistics has been used to predict things for ages, and “big data” is really just an extension of that. “Big data” tells us things like, I dunno, climate change is real.
What is it that we can see in faces?
Faces don’t often contain bananas, so why do we keep looking at them?
Isn’t there information to be gleaned?
Which definition of intelligence is being used?
Personality doesn’t affect bone structure.
Some hormones do.
Hormones also affect personality.
Yeah, it’s bullshit at this point.
But I’m not convinced that I can’t see vapid stupidity in some faces.
If I really can do that (unproven), a machine could conceivably learn to do that.
No?
Machine prejudice, whether it works or not, must be ruled out as a legal means of assessing a person’s mental and psychological attributes.
I usually really value your posts, and missed them dearly, but in this I disagree.
So-called big data is just data, usually quite unsorted and badly standardised, but data. And you can apply stats to it. Machine learning is just stats. However, as with all stats: if your hypotheses are bullshit, your predictions will be bullshit, too. And if you apply the wrong tool to your data (usually by violating the statistical assumptions the method relies on), your predictions will be worse than bullshit.
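To make that concrete, here’s a toy sketch (pure Python, every number invented for illustration, nothing to do with the actual product): give a model enough meaningless “facial features” and few enough people, and some feature will appear to predict a coin-flip “personality” label purely by chance. That’s bullshit hypotheses plus the wrong tool in about ten lines.

```python
import random

random.seed(0)
n_people, n_features = 30, 500  # few samples, many meaningless "features"

# Each person gets 500 random binary "facial features" and a
# coin-flip "personality" label -- there is no signal whatsoever.
X = [[random.random() < 0.5 for _ in range(n_features)] for _ in range(n_people)]
y = [random.random() < 0.5 for _ in range(n_people)]

# For each feature, measure how often it agrees with the label,
# and keep the best-looking one -- classic multiple-comparisons trap.
best_acc = max(
    sum(x[j] == label for x, label in zip(X, y)) / n_people
    for j in range(n_features)
)
print(f"best single-feature 'accuracy' on pure noise: {best_acc:.0%}")
```

The best feature will typically “predict” the coin flips well above 60% of the time, despite the data being pure noise. Present that number without a held-out test set and it looks like a discovery.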
But that you “can’t predict anything at all” is just not correct, from where I stand.
Big data was originally associated with three key concepts: volume, variety, and velocity. Other concepts later attributed to big data are veracity (i.e., how much noise is in the data) and value.
Well that is more what I was referring to. However, I was mostly trying to make a rhetorical remark, for which I may have… missed the mark.
Yes, “big data” tends to imply very big (duh) non-structured data sets that may represent a variety of types of data. And then we try to coax the algorithms into finding correlations, groups, all that fun stuff.
IANADS but it is an area I have interest in both personally, and professionally. I will attempt to be less cavalier in my descriptions of the technology when making rhetorical arguments moving forward.
Even if we abandoned statistics entirely we’d still predict things. The whole idea of science is to create theories with predictive power. I can still rely on rocks to fall towards the earth when I let go of them, regardless of stats. But I really question that statistics has much predictive power at all.
Statistics tell you about how things are and how things have been. They also tell you how things will be to the extent that the future is relevantly like the past. Predicting the future with statistics is a huge minefield of is-ought fallacies and mistaking correlation for causation. Predicting human behaviour from statistics runs into even more problems: culture changes, people change.
This AI may be flawed for all kinds of reasons. Most likely it’s fed with bad data. But one of the biggest problems is that our ability to judge each other’s intentions and personalities is developed adaptively within the context of a particular culture. My ability to detect deception is countered by your ability to deceive. AIs that appear to be better than humans are usually dramatically worse, but only in a very select way. It’s like those glasses that fooled face recognition. One of the things that made the AI so successful was zeroing in on only the key features that were most relevant, but that meant not doing any second guesses or double checks: if a thin area around the eyes looked like Scarlett Johansson, then it was Scarlett Johansson, never mind that the rest of the face belonged to an Indian man with a beard.
So an AI that is good at telling people’s personalities is going to rapidly start failing the moment that humans start having any interest in fooling that AI. And AIs we build off historical data to try to predict the future will start failing badly the moment the future stops aligning with the past in some relevant way, and we will have no idea what that “relevant way” even is.
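A minimal sketch of that failure mode (a hypothetical toy world, nothing here comes from any real system): a rule learned from past data looks great in-sample and falls apart the moment the association it learned stops holding.

```python
import random

random.seed(1)

def sample(era_bias):
    # Toy world: one observable trait, one hidden label.
    # era_bias controls how often trait and label coincide in this era.
    trait = random.random() < 0.5
    label = trait if random.random() < era_bias else not trait
    return trait, label

past = [sample(0.9) for _ in range(1000)]    # past: trait tracks label 90% of the time
future = [sample(0.4) for _ in range(1000)]  # future: the association has decayed

# "Model" learned from the past: predict label = trait.
acc = lambda data: sum(t == l for t, l in data) / len(data)
print(f"accuracy on past data:   {acc(past):.0%}")
print(f"accuracy on future data: {acc(future):.0%}")
```

The model itself never changed; the world did. And nothing inside the model tells you that it happened, which is exactly the “relevant way” problem: you only find out after the predictions have already gone wrong.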
None of that makes AI useless or bad. But I’ve been in a lot of meetings with a lot of excited people who don’t realize that their AI is doing this:
I tend to push back against the narrative that AI is useful at all, because I think it ought to be used only by people who actually understand the fraught nature of the information it produces. Not by people who see “70%” and say, “Wow, that’s pretty much every time, and we know that for sure because a computer told us!”
Before they moved on to the Faception gig, this particular troupe of clowns were peddling some software that would take 2D TV footage and convert it into 3D images (frame by frame, in real time). They foresaw great applications in the surveillance / security industry.
LOL! Yes, being able to build up the person’s face from a photo of the backside of their head would be quite useful! Too bad the unicorn piss and unobtainium needed to make it work are in such short supply.
Just briefly, as I don’t know where and when I will find time for a nuanced reply: while I agree on the specific case, and also on applying AI to classify human personality, I think your scepticism towards stats and ML might be too broad. At least your statements seem too generalised: