One-star ratings have worse grammar and spelling than five-star ones

I wonder if part of the issue is the quantity of reviews available for comparison. People are more likely to write about a negative experience than a positive one. I’ve written a few one-star reviews on Yelp myself, for services ranging from poorly rendered to “tech was outright abusive.” And people probably write 5-star reviews when they really enjoyed something and want other people to know. 3-star reviews are probably the rarest of all, because why write about a totally average experience?

5 Likes

VERY TEMPTED omg you have no idea

1 Like

Agreed. This is one bit of lack of formalism up with which I will not put!

6 Likes

I’ve always been bad at grammer and spelling. I was good at math so I became an engineer. Does that make me stupid? Does that mean the thing I’m rating poorly is actually good? Don’t succumb to the grammer nazis arbitrary ideas.

1 Like

If negative reviews are longer, doesn’t that automatically make them more error prone?

1 Like

I hate that when I call Amazon for seller support and talk to some poorly trained person, they are on pins and needles the whole time. They know that an automatically generated email will go out the minute the call ends asking me to rate my satisfaction with their support, and if I give them a negative review, they won’t last the day. That’s too much power over another human’s life to just casually hand out.

3 Likes

FTFY.

ETA: Or would that be Grammar Nazis’?

1 Like

Looks like they were measuring errors per word, so maybe not.

You still might hit a floor effect with very short positive reviews though. If someone makes, say, 3 errors in 40 words, that’s a 7.5% error rate. If you shorten that comment down to 10 words, it’d be possible to drop that error rate down to 0%. This wouldn’t represent a change in grammatical ability, just the fact that if you leave a very short comment, you’re less likely to demonstrate errors by chance, even if your rate of errors is unchanged.
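
To put a rough number on that floor effect, here’s a quick simulation sketch. The 7.5% per-word error probability is just the illustrative figure from above, nothing measured: the average observed error rate stays put at every length, but the share of reviews that happen to show zero errors climbs fast as reviews get shorter.

```python
# Hypothetical sketch: same per-word error probability at every review length.
import random

def simulate(n_words, p_error=0.075, trials=10_000):
    zero_error = 0
    rate_sum = 0.0
    for _ in range(trials):
        errors = sum(random.random() < p_error for _ in range(n_words))
        rate_sum += errors / n_words
        if errors == 0:
            zero_error += 1
    return rate_sum / trials, zero_error / trials

for length in (10, 40, 200):
    mean_rate, zero_share = simulate(length)
    print(f"{length:>3} words: mean error rate {mean_rate:.3f}, "
          f"error-free reviews {zero_share:.1%}")
```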

You sound like my daughter; are you a grad student?

Designing a sentence, a paragraph, or a book is much like designing a hydraulic system or an airplane; there are rules/laws you have to observe or the resulting product is unfit for use. Spelling is important because the guy on the shop floor will, in the case of ambiguity, always misinterpret what you wrote in the worst possible way.

2 Likes

foooooorf!

But shouldn’t you get enough short reviews with high error rates (“Its awesome!” = 50%) to bring it back in line?

I guess that’s the question: does error rate correlate with review length, or with review score?

Yep. And we could split that up of course, by treating review length as a continuous variable, or binning it into several length categories, and seeing how much of the variance in grammatical ability is due to length.
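
Something like this, maybe (purely a sketch; it assumes a table of reviews with columns for star rating, word count, and error count, which is not data I actually have):

```python
# Sketch only: 'reviews' is assumed to have columns 'stars', 'n_words', 'n_errors'.
import pandas as pd

def error_rate_by_length_and_stars(reviews: pd.DataFrame) -> pd.DataFrame:
    reviews = reviews.copy()
    reviews["error_rate"] = reviews["n_errors"] / reviews["n_words"]
    # Bin length into quartiles, then check whether the star-rating gap
    # in error rate survives within each length bin.
    reviews["length_bin"] = pd.qcut(reviews["n_words"], q=4,
                                    labels=["short", "medium", "long", "very long"])
    return reviews.pivot_table(index="length_bin", columns="stars",
                               values="error_rate", aggfunc="mean")
```

If the one-star vs. five-star gap shows up within every length bin, length alone isn’t doing the work.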

I’d also argue that errors per word may be the easier metric, but it may be more informative to classify errors as errors of omission or errors of addition and, for the former, look at errors per opportunity for error. If someone writes in simple sentences and doesn’t use any contractions or possessives, they could write a long review with a very low error rate, simply because the opportunity for a missed comma or apostrophe never comes up. They still get 0 errors per 500 words, but that’s weighted the same as someone who wrote 500 words with lots more chances to screw up. You could also count homophones and “its/it’s”-type words as error opportunities.

The reason this would be helpful is that you could directly compare error opportunities between positive and negative reviews. If positive reviews are short, that implies to me they have little to say beyond “Good product. Would buy again.” Those sentiments (admittedly expressed as sentence fragments here) are direct and easy, and allow for simple grammatical structure. Negative reviews are longer, implying that they tend to go into detail about how and why a product was bad, which typically requires more complicated grammatical structure. Errors per opportunity for error helps account for this, even if it may lower the sample size for the positive reviews.
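
Here’s a rough sketch of what I mean, with everything invented for illustration; it assumes each review has already been annotated with its error count and a count of “risky” constructions (possessives, contractions, its/it’s slots, and so on):

```python
# Rough sketch of "errors per opportunity" vs. "errors per word".
# All names and numbers here are made up for illustration.
from dataclasses import dataclass

@dataclass
class Review:
    n_words: int
    n_errors: int
    n_opportunities: int  # e.g. possessives, contractions, its/it's slots

def errors_per_word(r: Review) -> float:
    return r.n_errors / r.n_words

def errors_per_opportunity(r: Review) -> float:
    # A review with no risky constructions gets no credit for "perfect" grammar.
    return r.n_errors / r.n_opportunities if r.n_opportunities else float("nan")

plain = Review(n_words=500, n_errors=0, n_opportunities=2)    # simple sentences
tricky = Review(n_words=500, n_errors=3, n_opportunities=40)  # lots of chances

for r in (plain, tricky):
    print(f"per word: {errors_per_word(r):.3f}, "
          f"per opportunity: {errors_per_opportunity(r):.3f}")
```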

3 Likes

I found spelling important because if the boss found a spelling error in the first sentence, they would reject your whole report. The shop floor would always call and ask me to come down and explain something. The boss didn’t. Thank god for F7!

this fycking websitge, goddamn its driving me crhist! aaaarrrghhhh.

Oh, man, there’s a ton of interesting* ways to look at this.
Once you start categorizing errors, you could start to look at the influence specific types of errors have on the reader’s perception of the reliability of the review.

Are certain types of errors more forgivable? Do some errors, as with the use of profanity, actually increase the perception of reliability? Do too few errors (what, two standard deviations?) imply that the ‘customer’ review was professionally written, as @nixiebunny suggested?

Are there differences among product types?

I’d love to spend some time with this data.

*YMMV
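
If anyone does get hold of the data, the kind of model I’d start with is just a regression of a reader-assigned reliability score on counts of each error type. Everything below is invented purely to show the shape of the analysis:

```python
# Hypothetical sketch: least-squares fit of perceived reliability on error counts.
import numpy as np

# Columns: spelling errors, punctuation errors, profanity count (all made up).
X = np.array([
    [0, 1, 0],
    [3, 0, 1],
    [1, 2, 0],
    [5, 1, 2],
    [2, 3, 1],
], dtype=float)
reliability = np.array([4.5, 3.0, 3.8, 2.1, 3.2])  # mean "trust" rating per review

# Add an intercept column and solve the least-squares fit.
X1 = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(X1, reliability, rcond=None)
print(dict(zip(["intercept", "spelling", "punctuation", "profanity"], coef.round(3))))
```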

1 Like

I have bad hearing due to gun controls…
and hence bad smelling due to loss of hearing

This might turn up at the Supreme Court and the FDA at some time.

It’s only a vague correlation. We can identify negative reviewers, but only with a high false positive rate. We don’t get to release the kill-bots. Awwww.

Has it occurred to anyone that fuckwits are more likely to give bad reviews because they’re less able to figure out how to use a product?

1 Like