Algorithms can write fake reviews that humans rate as "helpful"



Great, so now Math is the enemy.

[note sarcasm]


I am in awe of your highly perceptive and deep comment.


Is that sarcasm?


Why would a fake review have sarcasm?


Just like a broken clock is right twice a day, even a fake review can be right sometimes.


Not if it’s a good review for a Michael Bay movie.


This is an ominous sign, since fully automated attacks on review sites could spell the end of reviews as even a moderately useful way to sort through otherwise impossibly long lists of potential candidates for your money and/or attention.

I’m not really sure it was all that good an idea to begin with.


Why would sarcasm have a fake review?


To be fair, humans interacting with technology for the perception of rewards (“you rated xxxx restaurants in the last xx days, you’re now a ____ app Supreme Wizard”) is just an extension of the Skinner Box. Once they’re conditioned to give a go/no-go to everything that goes past their eyes, is there any reason to be shocked that fake reviews are considered “helpful” by humans?

I’m not convinced that machines are becoming smarter so much as humans are becoming progressively dumber.


Progress would be using automation to eliminate commerce so that people could get on with their lives instead of playing dead-end status games. Make money computer-only, and let them buy and sell everything elsewhere on their own.


Why would you think that the review process can be automated, but not the “helpful” votes?


This response makes a great point. Would read again!!! 4 out of 5 stars.


This topic was automatically closed after 5 days. New replies are no longer allowed.