
Negative reviews can generally be tracked easily per registered user. Having the weight of negative reviews fall off exponentially with how frequently a user leaves them seems like a good enough idea that I'm surprised it's either not done more often, or not more apparent.

If someone complains all the time about everything, the people around them quickly learn to ignore it. Our systems should too.
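
A minimal sketch of that falloff, assuming each negative review is weighted by how many negative reviews the same user has already left; the half-life parameter is an arbitrary assumption, not something from the comment above:

    import math

    def negative_review_weight(prior_negatives: int, half_life: float = 3.0) -> float:
        # Weight of a user's next negative review, halving for every
        # `half_life` negative reviews they have already left (illustrative value).
        return math.exp(-math.log(2) * prior_negatives / half_life)

    print(negative_review_weight(0))   # 1.0   -- first complaint counts fully
    print(negative_review_weight(3))   # 0.5
    print(negative_review_weight(12))  # ~0.06 -- serial complainer barely registers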




That would be an easy way to get rid of negative reviews.

I wonder if you could utilize this information. Perhaps negative reviews could get more weight if they're from people who have also left positive reviews.
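
A rough sketch of that weighting (the floor and the default for unknown reviewers are my own arbitrary choices, not something the parent proposed):

    def reviewer_negative_weight(num_positive: int, num_negative: int) -> float:
        # Scale a reviewer's negative reviews by how balanced their history is:
        # someone who only ever complains sits near the floor, while someone
        # who also leaves positive reviews approaches full weight.
        total = num_positive + num_negative
        if total == 0:
            return 0.5  # unknown reviewer: middling weight (arbitrary default)
        floor = 0.2     # assumed minimum so habitual complainers aren't silenced
        return floor + (1.0 - floor) * (num_positive / total)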

You don't think removal of negative reviews is a problem?

There already is an easy way to get rid of negative reviews. Just create a new product. That’s what they do.

What about deleting negative reviews?

Ding ding ding!

You basically can't allow people to give negative feedback for a thing and have that feedback mean anything (i.e. affect recommendations for anyone but you, or be shown to other users) without insincere feedback being used to hurt the reviewee.

There are ways to counter this. The easiest is to not show negative reviews but count them positively: a dislike actually boosts the item just like a positive review would. Not really recommended, since it promotes rage bait, but brigading would stop working.


I can vouch that negative reviews can be hidden. My friend wrote one, and when he is logged in he sees "23 reviews", including his own. Everyone else sees "22 reviews", and his is omitted.

Not a bad point; I hadn't thought about that. But you do still have the option of flushing all reviews with an update if they're primarily negative.

Who's to say they won't start doing that in the future, with a competitor purchasing the negative review? And what about my second point, which applies to any kind of review?

Paying people for negative reviews probably isn't the best strategy.

> One of the big hurdles to overcome on a user ratings site is that people are most motivated to write reviews when they are outraged. There is a massive selection bias, and if nothing is done to alleviate this, your site runs the risk of becoming a soap box for ranting.

This is the risk we take listening to ANYTHING that depends on voluntary responses. Burying negative reviews to approximate some intrinsic rating dismisses the value of actual user input, with the paternalistic view that the review aggregator somehow knows better. Who decides which and how many negative reviews to hide? You? How about letting users figure it out, knowing that trusting any opinion is a risk, and learning from experience how to calibrate the reviews they read? That's the best we can hope for without the review site, in effect, clobbering consensus views to produce something it "knows to be more accurate." In that case the users no longer review; the site, which is not a user, does.

> If you receive 2 bad reviews and 2 good ones, that does not mean 50% of people are dissatisfied.

You absolutely cannot know that. It could be that 70% were dissatisfied. You can't assume a true rate without taking on the role of a reviewer yourself, which you cannot legitimately do without experiencing the item being reviewed.
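
To put a rough number on that, here is an illustrative calculation (mine, not the commenter's): with 2 negative and 2 positive reviews and a uniform prior, a simple Beta posterior over the dissatisfaction rate has a 95% credible interval of roughly 15% to 85%, so a true rate of 70% is entirely consistent with the data.

    from scipy.stats import beta

    negatives, positives = 2, 2
    # Beta(1 + negatives, 1 + positives): posterior under a uniform prior,
    # ignoring selection bias entirely (which only widens the real uncertainty).
    posterior = beta(1 + negatives, 1 + positives)
    low, high = posterior.interval(0.95)
    print(f"95% credible interval: {low:.2f} to {high:.2f}")           # ~0.15 to 0.85
    print(f"P(true dissatisfaction >= 70%): {posterior.sf(0.7):.2f}")  # ~0.16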


I agree that this is a dark pattern, but I also empathise with whoever first implemented this. Negative reviews are often just "this doesn't work", no further information. That's not actionable at all as a developer, and even if you somehow do fix the underlying issue, it's pretty difficult (or impossible) to get people to update their reviews.

I agree that negative reviews attract a lot more scrutiny. I've also had negative reviews scrubbed.

Just refreshed the page after some 50 minutes. At the current rate there are about 200 negative user reviews per hour.

Yeah, I just realised a few negative uses for 1 right now. Like those cases where a business threatens to sue anyone who leaves negative feedback, and hence such a tool could be used to unmask anonymous reviewers giving an honest evaluation of their products/services.

Or maybe an odd case where it turns out the author of a book or creator of a product finds out someone they know in real life left the negative review and physically attacks them or something.


The problem is, a continuous, consistent pattern of negative reviews can just as easily come from someone who has it out for a business.

I have a friend who is currently a target of such a vendetta. He got into an edit dispute with someone who was trying to censor stuff on a wiki. This other person responded by flooding listings of my friend's previous employer (which has been out of business for a few months) on nearly every review site out there with a continuous stream of utterly fabricated reviews. These reviews range from calling him a pedophile (easy enough to get deleted) to pretending to be a client who was screwed by his incompetence. I'm not sure there is any way to differentiate the latter kind of reviews from genuine negative experiences.


When I labeled the comments, I didn't label books that were criticized, so in theory the model should filter out negative reviews. But the training dataset is currently pretty limited in size, so you can still see some negative ones. I suspect that with more training data this problem will go away.

It seems like maybe the solution is not to remove reviews, but weight them somehow. 100k reviews left in 2 days probably shouldn't have the same weight as 100k reviews left over 2 years.

Or a more sophisticated version of this would be to somehow cluster reviews based on the particular issue they are reporting.
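
A toy version of the time-spread weighting (the per-day cap is an arbitrary assumption for illustration; the clustering idea would need actual topic extraction on top of this):

    from collections import Counter
    from datetime import date

    def damped_review_count(review_dates: list[date], daily_cap: int = 200) -> int:
        # Cap how much any single day can contribute, so a short burst of
        # reviews counts far less than the same volume spread over years.
        per_day = Counter(review_dates)
        return sum(min(count, daily_cap) for count in per_day.values())

With a cap of 200/day, 100k reviews dumped in 2 days contribute 400, while 100k spread over 2 years come through at essentially full weight.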


People who write inadequately negative reviews should be punished too.