Social Networks The Internet

People Are Complete Suckers For Online Reviews (nypost.com) 162

schwit1 shared an article from the New York Post: No reviews, no revenue. That's the key takeaway from a new study published in Psychological Science, which finds that if two similar products have the same rating, online shoppers will buy the one with more reviews... "[When] faced with a choice between two low-scoring products, one with many reviews and one with few, the statistics say we should actually go for the product with few reviews, since there's more of a chance it's not really so bad," wrote researcher Derek Powell of Stanford University, lead author of the report. In other words, when there's only a handful of reviews, a few bad ones break the curve and bring down the overall rating. "But participants in our studies did just the opposite: They went for the more popular product, despite the fact that they should've been even more certain it was of low quality," he wrote.

Matt Moog, CEO of PowerReviews, previously conducted a study with Northwestern University [PDF] that drew from an even larger data pool of 400 million consumers, which also found that the more reviews there are of a product, the more likely it is that a customer will purchase that product... He has also found that customers who read reviews often click the bad ones first. "They want to read what's the worst thing people have to say about this," he said... Most online shoppers (97 percent to be exact) say reviews influence their buying decisions, according to Fan & Fuel Digital Marketing Group, which also found that 92 percent of consumers will hesitate to buy something if it has no customer reviews at all.

  • People are sheep. News at 11.
    • by Spazmania ( 174582 ) on Sunday August 27, 2017 @02:35PM (#55093899) Homepage

      The three main kinds of mistruth.

      With more reviews, the buyer has a better idea of -exactly- what's bad about the product and thus has a better chance of making an educated decision about buying it.

      The article and study are examples of misusing statistics. The correlation between number of reviews and purchases neither tests nor demonstrates a causal relationship from the former to the latter, and even if it did, it would not demonstrate the -claimed- causal relationship.

      • by war4peace ( 1628283 ) on Sunday August 27, 2017 @04:15PM (#55094227)

        Exactly. Furthermore, there is a problem with the methodology:

        This bias was so strong that they often favored the more-reviewed phone case even when both of the options had low ratings, effectively choosing the product that was, in statistical terms, more likely to be low quality.

        That is incorrect.
        Say you have product A and product B, each with a score of 3 stars out of 5.
        Product A has 6 reviews, 3 of which are 1-star and the other 3 are 5-star - the average is obviously 3.
        Product B has 600 reviews, 3 of which are 1-star, 594 are 3-star and the other 3 are 5-star - the average is obviously 3, again.

        Based on that information alone, you can NOT determine which product is more likely to be bad. You could however determine which product is more popular - but that's it.

        Furthermore, if both products have a low score, it is actually better to go for the more popular one, simply because more people bought it, so more troubleshooting information, more advice about overcoming its shortcomings and more workarounds will be available online. Put briefly, you are more likely to find someone else who had the same problem with the product and posted a solution online.
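
        A quick Python sketch of the arithmetic in the 6-versus-600 example above (the ratings are taken from that hypothetical example, not from real data): both products average exactly 3 stars, but the standard error of that average differs enormously, which is the uncertainty being argued about.

        ```python
        import statistics

        product_a = [1] * 3 + [5] * 3              # 6 reviews: three 1-star, three 5-star
        product_b = [1] * 3 + [3] * 594 + [5] * 3  # 600 reviews, mostly 3-star

        for name, ratings in (("A", product_a), ("B", product_b)):
            n = len(ratings)
            mean = statistics.fmean(ratings)
            sem = statistics.stdev(ratings) / n ** 0.5   # standard error of the mean
            print(f"Product {name}: n={n}, mean={mean:.2f}, std. error={sem:.3f}")

        # Both means are 3.00, but A's standard error is ~0.89 versus B's ~0.008,
        # so the data pin down B's quality far more tightly than A's.
        ```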

        • by clovis ( 4684 ) on Sunday August 27, 2017 @05:31PM (#55094437)

          And I suppose it's my turn to be the guy who posts the related xkcd:
          https://www.xkcd.com/937/ [xkcd.com]

        • by ShanghaiBill ( 739463 ) on Sunday August 27, 2017 @06:00PM (#55094503)

          Also, the fewer reviews, the more likely those reviews are FAKE. Most of the statistical claims made in TFA are based on the implicit assumption that all the reviews are equally valid. So the real problem here is not dumb customers but dumb researchers.
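
          A purely hypothetical sketch of why that matters (all counts and scores invented for illustration): the same handful of fake 5-star reviews moves a sparsely reviewed product's average far more than a heavily reviewed one's.

          ```python
          def average_with_fakes(genuine_ratings, n_fake=5, fake_score=5):
              """Average rating after a seller injects n_fake fake reviews."""
              ratings = list(genuine_ratings) + [fake_score] * n_fake
              return sum(ratings) / len(ratings)

          honest_small = [2] * 10    # 10 genuine reviews averaging 2.0
          honest_large = [2] * 1000  # 1000 genuine reviews averaging 2.0

          print(average_with_fakes(honest_small))  # (10*2 + 5*5) / 15     = 3.0
          print(average_with_fakes(honest_large))  # (1000*2 + 5*5) / 1005 ~ 2.01
          ```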

          • Re: (Score:2, Informative)

            by Anonymous Coward

            Not to mention all the reviews that are complete garbage. A few years ago I saw a bunch of 1-star reviews for a product; curious, I checked, and almost all of them were complaints against the shipper, along the lines of "product works perfectly fine but shipping took too long due to that hurricane. 1 out of 5 stars." That's right, they dinged the product (which actually had many sellers) because of shipping delays due to Superstorm Sandy.

          • by David_Hart ( 1184661 ) on Monday August 28, 2017 @12:39AM (#55095473)

            Also, the fewer reviews, the more likely those reviews are FAKE. Most of the statistical claims made in TFA are based on the implicit assumption that all the reviews are equally valid. So the real problem here is not dumb customers but dumb researchers.

            I'm willing to bet that it also doesn't factor in version/software changes, the fact that many reviews cover multiple models, or that for every detailed, excellent review there are many stupid ones (they ordered the wrong thing in the first place, don't know how to use it, had a problem with customer service, etc. - stuff unrelated to how the product works).

        • Re: (Score:2, Insightful)

          by Anonymous Coward

          It's more important to see the latest reviews. Often scumbag manufacturers will replace a good product with a cheaper, shittier version and ride the popularity of the early reviews all the way to the bank. That's why well-established products should really be scrutinized.

        • Knowing almost all places have a few fake good reviews from friends and family, the place with more reviews is more likely to be genuine.

        • Say you have product A and product B, each with a score of 3 stars out of 5.
          Product A has 6 reviews, 3 of which are 1-star and the other 3 are 5-star - the average is obviously 3.
          Product B has 600 reviews, 3 of which are 1-star, 594 reviews are 3-star and the other 3 are 5-star - the average is obviously 3, again.

          Based on that information alone, you can NOT determine which product is more likely to be bad.

          Sure you can, since both you and the researchers have—I believe, wrongly—assumed that good products are just as likely as bad products, which is fine in theory, but doesn't hold up in the real world any better than a physicist's example that starts with "consider a perfect sphere in a frictionless vacuum". In the real world, bad products outnumber good products by a wide margin, and we're well-served in relying on past experience to shape our future decisions.

          With 600 reviews establishing a 3-st
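
          One hedged way to formalize the "bad products outnumber good" argument above is a toy Bayesian calculation. Every number below is invented purely for illustration (a 70% prior that a product is bad, a bad product earning a positive review 40% of the time, a good one 80%); nothing comes from the study itself.

          ```python
          from math import comb

          P_BAD = 0.7                          # assumed prior: 70% of products are bad
          P_POS = {"bad": 0.4, "good": 0.8}    # assumed chance each review is positive

          def posterior_bad(positive, total):
              """P(product is bad | `positive` of `total` reviews were positive)."""
              like = {q: comb(total, positive) * p ** positive * (1 - p) ** (total - positive)
                      for q, p in P_POS.items()}
              joint_bad = P_BAD * like["bad"]
              joint_good = (1 - P_BAD) * like["good"]
              return joint_bad / (joint_bad + joint_good)

          print(posterior_bad(3, 6))      # ~0.89: probably bad, but real doubt remains
          print(posterior_bad(300, 600))  # ~1.00: a mediocre average over 600 reviews leaves no doubt
          ```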

          • By being "fairly certain", you ASSUME rather than DETERMINE.
            I said "You can't DETERMINE" - which you can't. You can, however, ASSUME (and still be wrong about it) that product A is just as likely to be bad, albeit with a much lower degree of confidence.

            • Determinations aren't separate from assumptions: they're based on assumptions. Moreover, depending on your assumptions, your determinations may be uncertain or wrong.

              But let's take a step back for a sec, since I want to get on the same page about something else you said. I think that the distinction you just tried to make is between choosing with certainty ("DETERMINE") and choosing despite uncertainty ("ASSUME"). You're saying you were talking about determining a sure thing, whereas I was talking about mak

              • Understood. English isn't my native language and at times there are some communication inconsistencies, which I struggle to minimize but they do happen nevertheless.

                "you can NOT determine which product is more likely to be bad" needs to be rephrased, of course.
                What I wanted to say is "you can NOT determine which product is worse, based on reviews".
                Meaning "if you assume any of the products being worse than the other one solely based on that data, it would be akin to guessing".

                I hope that clarifies things an

                • No worries, and yup, I think we're on the same page. I agree that the reviews in your example, when taken by themselves, are insufficient for making any sort of reasonable determination one way or the other, so any choice would amount to making an uneducated guess. Which isn't to say that we can't make educated guesses, but to do so we would need to incorporate additional information, which goes beyond the scope of what you were trying to address in your original post.

                  Also, let me just say: your English is

                  • Thank you :)
                    I've been told that a lot in the past, and with foreign languages the secret is to remain immersed at all times. Watch movies in that language, talk to people in that language, etc. Best would be to live in a foreign country for a while, you'll learn that language in no time.

                    As for me, working for a US-based company (they have a large development center in my puny Eastern European country), it's easy. Most good movies, music and online communities being in English helps tremendously as well.

    • by Anonymous Coward

      ...suggestions for competitors' products. I then make my own determination after checking them out.

      • by TheRaven64 ( 641858 ) on Monday August 28, 2017 @04:00AM (#55095801) Journal
        Mod this up. Most reviews are useless when comparison shopping because the people writing them have a sample size of one. I recently bought a new electric shaver. Most of the reviews are from people who have owned maybe one other shaver in their life. For a product with 600 reviews, 500 of them are left within a few hours of the new one arriving, so all that you really know from them is that it came in a box and didn't break in the first use. The only useful reviews were the ones where someone actually compared it to others that are still available.
        • Amazon wants you to review a product as soon as you open the box. There is never any follow-up to elicit a review after you have used the product for a while. The people who do review after some experience are the ones who had trouble with the product.

    • When they start pointing and clicking, they "think" it makes them intelligent. I am so glad there is not a test you have to pass before you can purchase an electronic device (i.e. a computer) to put in your home, let alone get on the Internet. We would all be out of a job and the CEOs of OEMs would say WTF.
  • bad news sells. more bad news sells more.
    • by Anonymous Coward

      If I want a product, I pretty much know what it is supposed to include, but I want to know whether the promised features and quality are as claimed. That's why I check the bad reviews first. Most of the good reviews are pretty much "works for me, thanks", which doesn't really tell you anything.

    • by Anonymous Coward on Sunday August 27, 2017 @01:59PM (#55093777)

      I always check the bad reviews and only the bad reviews. Good reviews generally aren't helpful, they're either fake and posted by the seller or genuine but not useful. Seeing twenty 5 star reviews that say "it worked great for me" is great, but what I really want to know is how it failed for people and if I care about those failures.

      If all the bad reviews are by people who are clearly crazy or doing something stupid, I can be fairly confident in the product. If they instead reveal significant flaws, I may want to reconsider.

      As always, XKCD has a relevant cartoon about this [xkcd.com].

      • Pretty much this, the exception being movie reviews. There I tend to read both the positive and negative ones. Well, until recently. Is it me or is there something fishy going on at IMDB? New movies receiving positive reviews, written by long time members, yet uncannily smelling like something written by studio drones, using words you'd expect from a marketeer rather than from a movie enthusiast. There's an odd sort of similarity to these reviews.
  • More Complex (Score:5, Insightful)

    by zieroh ( 307208 ) on Sunday August 27, 2017 @01:38PM (#55093709)

    I think it's a bit more complex than that. Oftentimes, you can determine from poor reviews exactly what the shortcomings are, and decide if those shortcomings affect your intended use of the product. If a competing product has no reviews, then you have no way of knowing what the shortcomings are.

    • Re:More Complex (Score:5, Insightful)

      by TWX ( 665546 ) on Sunday August 27, 2017 @01:52PM (#55093753)

      Correct. What went wrong for other people? Why did it go wrong? Will this function impact me?

      As a case in point it's not really possible to buy an inexpensive TV that lacks "smart" features anymore. Smart features are very inconsistent from manufacturer to manufacturer. Thing is, if I never connect my TV to anything but an antenna, do those smart features matter? I need to read other users' experiences with a given TV or a range of candidate TVs to see what features work and what don't, and just because a TV gets poor reviews doesn't mean that the functions that I'll use are the ones poorly reviewed. It could be that the Internet connectivity stuff is what's garbage, or like the one I actually bought, the stock remote sucks but if I get a $10 remote from the previous model TV, I get 80% of the features back that normally require a cell phone with Bluetooth or other "smart" feature to work.

      • Re:More Complex (Score:5, Insightful)

        by Anonymous Coward on Sunday August 27, 2017 @02:19PM (#55093845)

        Correct. What went wrong for other people? Why did it go wrong? Will this function impact me?

        It strikes me as somewhat ironic that a couple of Stanford researchers appear to have missed the significance of having thorough documentation.

        Because that's basically what you're getting with masses of reviews readily available.

        • Al Dufuq:

          the statistics say we should actually go for the product with few reviews,

          I pick the most-reviewed item with the same rating because if there are only 5 reviews I have no idea whether the failure rate is 1/100 or 1/6. If there are 10,000 reviews and 8% of them are 1-star, I feel pretty good that it probably has about an 8% failure rate.

          What statistician says "you should study with the smallest sample size all else being equal"?!
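
          A rough sketch of that intuition (the counts are hypothetical, chosen to mirror the comment): a 95% Wilson confidence interval for the share of 1-star reviews is huge at n=5 and razor-thin at n=10,000.

          ```python
          from math import sqrt

          def wilson_interval(bad, n, z=1.96):
              """95% Wilson score interval for the true share of 1-star reviews."""
              p = bad / n
              denom = 1 + z * z / n
              center = (p + z * z / (2 * n)) / denom
              half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
              return center - half, center + half

          print(wilson_interval(1, 5))        # 1 bad review out of 5:      roughly (0.04, 0.62)
          print(wilson_interval(800, 10000))  # 800 bad reviews out of 10k: roughly (0.075, 0.085)
          ```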

          • What statistician says "you should study with the smallest sample size all else being equal"?!

            That's not what they said. What they said was that if you have two products with the same low rating, the one with fewer reviews has a higher chance of actually being good. Consider the case of a 1-star item with a single review vs. a 1-star item with 1,000 reviews. The first case could just be a failure to understand the product, a bad shipping experience, or even a data entry error, whereas the second case indicates widespread dissatisfaction.

            In other words, the smaller sample size is more likely to be wron

            • by TWX ( 665546 )

              Why does it have a higher chance of being good? I see not enough data points if there are not enough reviews. Its quality is ill-defined, not good.

              • I see not enough data points if there are not enough reviews. Its quality is ill-defined, not good.

                In the absence of any context I might agree with you, but we can make inferences from experience with other similar products to reason about the probable product quality even if there are no reviews yet for that particular item. The initial estimate is going to be along the lines of "average quality"—perhaps a bit lower than average if it's a new, untested item. A thousand reviews with a mean rating of one star would suggest well below average quality.

                Note that I never said that a dearth of reviews me

      • Comment removed based on user account deletion
        • by TWX ( 665546 )

          It means I research the significance of the left phalange, so that I can determine what issues with the left phalange mean for me. Knowing that the product has a left phalange means I have a starting point for further research, and I may find that it doesn't matter if the left phalange will fail under particular circumstances if those circumstances are unlikely or impossible in my application.

          When I did QA testing I had to create plausible scenarios for my testing. This meant researching how something rea

    • Re:More Complex (Score:5, Insightful)

      by Mr D from 63 ( 3395377 ) on Sunday August 27, 2017 @02:11PM (#55093813)

      I think it's a bit more complex than that. Oftentimes, you can determine from poor reviews exactly what the shortcomings are, and decide if those shortcomings affect your intended use of the product. If a competing product has no reviews, then you have no way of knowing what the shortcomings are.

      I read reviews, but only really pay attention to ones that make sensible explanations of what they do or don't like. Positive reviews can be very helpful if you are looking for a certain functionality, but they have to be precise and not cheerleading. A lot of reviews often means there are more good ones to find, more useless ones as well. For the most part, when I read reviews and then purchase, I wind up getting pretty much what I expect. You can never eliminate the possibility of a surprise when buying products you've never tried before.

      • Re:More Complex (Score:5, Informative)

        by omnichad ( 1198475 ) on Sunday August 27, 2017 @05:09PM (#55094365) Homepage

        Negative reviews by morons actually tend to be extremely helpful. You know exactly why it was rated poorly (they didn't read directions / too dumb to use it). I find this especially true for restaurant reviews, where they describe something "wrong" with the food that really means that the food was made correctly and they've just never had a good version of that food.

      • by Anonymous Coward

        Oftentimes in *bad* reviews, people will suggest features they wanted but did not get. That too can be a deciding factor in whether or not I want one product over another.
        Furthermore, reviews can also advise you on the update policies, and/or competence of the provider. Some products get better, others get worse.
        The beginning of a new product is almost always prone to problems, and using a new product instantly makes the user into a beta tester. Using long and well established products usuall

    • Exactly. I'm trying to find a good tablet only to watch videos when I'm away. I don't care that Android is stuck at v4, the CPU is too slow for modern applications, there isn't enough RAM for multitasking, the GPU can't run games at decent framerates or that wi-fi is unreliable unless you happen to be within five metres of your router. All I care about is battery life, screen quality, video playback, video/file management and expandable storage via SD/microSD.

      So far the best one seems to be the Samsung Gal

      • by Zxern ( 766543 )

        If you're only going to watch videos, then you probably don't want a Tab 7 E Lite. The display quality is terrible and the resolution low. You'll want to shell out the extra bucks for a pro version with the far better, higher-resolution display.

        Unfortunately, Android tablet makers won't put a good display on a cheap tablet; they only put them on the top-end devices.

      • You could even go down to a Kindle Fire and install the Play store for a pretty good price. It also has microSD support. I'm too cheap to even buy that, though, so I don't have firsthand experience.

    • Re:More Complex (Score:5, Interesting)

      by Wycliffe ( 116160 ) on Sunday August 27, 2017 @03:09PM (#55094031) Homepage

      If a competing product has no reviews, then you have no way of knowing what the shortcomings are.

      Not only this, but I'm always suspicious of a product with very few reviews. If people are actually buying the product then there should be some reviews. If there are no reviews, I start to suspect it to be a fraudulent listing.

  • Obvious (Score:5, Insightful)

    by burtosis ( 1124179 ) on Sunday August 27, 2017 @01:42PM (#55093725)
    If one product has 12 reviews and another has 45k, the 12-review one is rated 5 stars and the 45k-review one 4 stars, I'd be more likely to buy the one with 45k reviews. It's simply significantly harder to astroturf and bot 45k reviews than 12. Online, typically all you have to go on is a shitty picture, product details that are nearly always incomplete and exaggerated, and the reviews. The reviews are the least shitty way to not get ripped off. For the record, yes, I know that many reviews on nearly every site are fake, paid for, or posted by bots.

    Plus, sometimes the reviews are pure comedy gold and worth reading in their own right, which can inspire some sales too.
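
    One common heuristic that captures this preference is an IMDb-style weighted (Bayesian-average) rating, which shrinks small samples toward a prior mean. A sketch, with the prior values (a prior mean of 3.5 stars and a prior weight of 100 reviews) invented purely for illustration:

    ```python
    def weighted_rating(mean_rating, n_reviews, prior_mean=3.5, prior_weight=100):
        """IMDb-style Bayesian average: shrink small samples toward the prior mean."""
        return (n_reviews * mean_rating + prior_weight * prior_mean) / (n_reviews + prior_weight)

    print(weighted_rating(5.0, 12))     # ~3.66 - twelve perfect reviews barely move the prior
    print(weighted_rating(4.0, 45000))  # ~4.00 - 45k reviews dominate the prior entirely
    ```
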
    • Re:Obvious (Score:4, Informative)

      by tomhath ( 637240 ) on Sunday August 27, 2017 @02:19PM (#55093847)
      TFA noted the same thing:

      The exception to that rule is if every one of the reviews is giving this place or product five stars. “If the rating is unusually high, that actually can have a negative impact,” said Moog, as shoppers suspect this is too good to be true. “What we have from our data is that the optimal rating is about 4.4 stars.”

      • Re:Obvious (Score:4, Interesting)

        by 93 Escort Wagon ( 326346 ) on Sunday August 27, 2017 @03:00PM (#55093985)

        I am not sure that paragraph actually indicates that the author understands astroturfing - he seems to still be talking about the reader's state of mind.

        Also, I'd argue the author's understanding of statistics is flawed, since he apparently thinks a bell curve only has one side.

    • Re:Obvious (Score:5, Insightful)

      by _Sharp'r_ ( 649297 ) <sharper AT booksunderreview DOT com> on Sunday August 27, 2017 @02:20PM (#55093849) Homepage Journal

      Yeah, the study appears to naively assume all online products, sellers and reviews are completely legitimate and that reviews solely indicate a statistical level of quality and aren't influenced by other factors.

      Any product with tons of verified reviews means at least that the product has survived and sold enough to gather those reviews. That's an endorsement it's very difficult to fake.

      Now if they had actually purchased and tested products themselves to determine which was better as part of the study, they might have a decent conclusion. But since all they did was make a statistical assumption and then judge people's behavior as rational or not against that assumption, the study's conclusion is way off. They're trying to make a case about statistical uncertainty, and they refuse to believe results based on people's actual experience with purchasing products online.

      This is a signal to noise issue. People are ignoring other factors and trusting lots of reviews because they're searching for the signal within the influenced-by-seller noise.

      • by Kjella ( 173770 )

        Any product with tons of verified reviews means at least that the product has survived and sold enough to gather those reviews. That's an endorsement it's very difficult to fake.

        Exactly. Crap gets bad reviews and few sales, and the product is discontinued. If a poorly rated product continues to sell, it's probably low quality for a low price (the complaints basically boil down to expecting more than you'll reasonably get at that price point), or it's a niche product rated poorly by the mainstream because they don't understand who the target market is. Basically, it might not be a terrible product for you, if your requirements are low or you happen to fit the niche. A new product that's r

        • by Rakarra ( 112805 )

          P.S. These guys should really do a study on IMDB movie scores... they might learn a thing or two about the real world.

          Speaking of which, I've found them to be -generally- OK for mainstream movies, but only for stuff that's a year or more old. IMDB scores are wildly skewed when movies first come out due to self-selection and fanboy bias. If the movie is "political" or controversial in any way (like Ghostbusters 2016), expect the score to be way off as well.

    • Re:Obvious (Score:4, Insightful)

      by OzPeter ( 195038 ) on Sunday August 27, 2017 @02:44PM (#55093935)

      If one product has 12 reviews and another has 45k, the 12-review one is rated 5 stars and the 45k-review one 4 stars, I'd be more likely to buy the one with 45k reviews. It's simply significantly harder to astroturf and bot 45k reviews than 12.

      Personally I look for reviews in the middle of the pack to get a good sense of what the product is actually like. I see a lot of 5s that appear to be people gushing about a product and a lot of 1s where it seems to be all about trashing a product. I feel that the 2s, 3s and 4s give a more balanced perspective of a product.

    • by k6mfw ( 1182893 )
      Regarding 5-star reviews, I read an article about hotel reviews that said to ignore hotels that call themselves a 5-star hotel, because there are fewer than 200 of them in existence. You say reviews are comedy gold; that reminds me of the Yelp reviews of the Blue Pheasant in Cupertino, CA. Some say there's a lot of 40- and 50-somethings acting like 20-somethings (which is not a bad thing when you're cooped up in a cubicle and battling traffic jams all week). Or some write "there's a lot of working girls" which just because s
  • by Vermonter ( 2683811 ) on Sunday August 27, 2017 @01:48PM (#55093741)
    If something has 1000 reviews, I know that the bulk of those are likely to be legitimate. If something has 3 reviews, I have no idea how accurate the reviews are. More reviews = more sample data of user experience, and more data means that "wrong" reviews (reviews that don't reflect the typical user experience) are obscured. If you were presented with 2 studies where one used a sample of 5 people and the other used a sample of 5000 people, which results would you trust more? Reviews are just a less controlled study.
    • by Gr8Apes ( 679165 )
      Ah yes, but this says that 2 items have a 1- or 2-star rating, one with 1000 reviews, the other with 3. Which one is more likely to suck in this case?
      • Both suck unless the reviews show otherwise.

        The 1000-review item is known to suck, with 1000 reviews to back that up. For the 3-review item, if the complaints are well thought out (like 'this part and this part didn't work, and x part breaks if you do this'), then that item sucks too.

        However, if the 3 reviews show poor judgement (like 'I hate this company', 'it didn't work.', 'shipping too slow'), then there's a 'chance' the 3-review item isn't that bad. Only then would there be a 'chance' for the item to b

    • by Anonymous Coward

      Because no company is corrupt enough to pay for 1,000+ sock puppet accounts to post positive reviews for their crap products.

      • by g01d4 ( 888748 )

        sock puppet accounts [posting] positive reviews

        This. Review quality should be added as a weight. One admittedly crude way to measure the quality of a review is its length. With more effort, the weighting could include product specificity, and then specificity about particular features; for example, aesthetics-based reviews might weigh less than those about functionality. A thousand this-product-is-great one-line reviews might mean only as much as ten negative reviews that go into a lot more detail.
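
        A minimal sketch of the length-as-weight idea (the reviews below are invented examples, and character count is only the crude proxy suggested above):

        ```python
        reviews = [
            (5, "great"),                                              # short, low-effort
            (5, "works"),                                              # short, low-effort
            (2, "Broke after two weeks; the hinge cracked and the "
                "seller refused a replacement. Detailed photos attached."),
            (1, "The battery drains overnight even when powered off, "
                "and support never answered three emails."),
        ]

        plain_average = sum(stars for stars, _ in reviews) / len(reviews)
        length_weighted = (sum(stars * len(text) for stars, text in reviews)
                           / sum(len(text) for _, text in reviews))

        print(plain_average)     # 3.25
        print(length_weighted)   # ~1.7: the detailed complaints dominate the one-liners
        ```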

      • Because no company is corrupt enough...

        Holy shit thanks for the laugh. I needed that today.

        Oh, and welcome to planet Earth. Someone will be along shortly to introduce you to this concept we call "Politics."

    • by tomhath ( 637240 )
      FTFA:

      “Around 20 [and running up to 50] is the optimal number of reviews for a product to have to give consumers the confidence that this product has been tried enough by enough people,” he told Moneyish.

      If there are 1000 reviews and the product is rated low, people will tend to avoid it. But if there are a handful of reviews it's reasonable to assume two things: 1) several people have tried the product, and 2) some had issues and left low reviews. People tend to complain in reviews far more often than they compliment, so a few negative reviews don't scare me. Especially if the complaints are from people who bought a lower priced product and then whined because it wasn't as good as a higher priced alt

  • I view products with few reviews to be a complete unknown in terms of quality. So if there are similar products with many reviews (and a decent rating) I'm going to prefer those, because (in theory) they're more of a known quantity.
  • by Anonymous Coward

    It's insightful and accurately reports work conducted by impartial scientists with deep expertise in the field.

    Five stars

    • we should actually go for the product with few reviews, since there’s more of a chance it’s not really so bad...

      Instructions unclear. Made me buy low quality dildo instead. Would not read again.

      1 star

  • by Anonymous Coward on Sunday August 27, 2017 @02:29PM (#55093875)

    Yelp tried to extort them a few times, so this restaurant gives 50% off on pizza if you give them a 1-star review:

    https://www.yelp.com/biz/botto... [yelp.com]

    http://www.bottobistro.com/REW... [bottobistro.com]

    http://insidescoopsf.sfgate.co... [sfgate.com]

    http://time.com/money/3398188/... [time.com]

  • I find that some of the bad reviews are actually people complaining of stupid stuff like "hard to set up/install/etc.". Worth reading the reviews.

    • Sometimes those are worth listening to. I read a few reviews of a wireless access point that said it was difficult to configure. I assumed that the reviewers were idiots. I have a PhD in computer science, so I was pretty sure I could figure it out. It turns out that the manufacturers had decided that technical terms were confusing and so they'd make up their own terms for every single thing in the configuration interface. It took hours of a combination of trial and error, poking the thing remotely with
  • Nothing New (Score:5, Interesting)

    by boudie2 ( 1134233 ) on Sunday August 27, 2017 @02:58PM (#55093973)
    At one point in the 1970s it was my misfortune to be employed as a Filter Queen vacuum cleaner salesman (regrets, I've had a few). Anyways at the time I was taught that some people "have to be told to buy" something, even if it's from a perfect stranger. They can't make the decision on their own as most people are told what to do their whole lives. That and "Advertising is the best way to sell something, especially if it's no fucking good".
  • by petes_PoV ( 912422 ) on Sunday August 27, 2017 @03:13PM (#55094055)
    Many times I have seen 1- or 2-star reviews that complain about other aspects of the order: packages arriving late, damage, orders being cancelled.

    These might not be the proper place to complain about non-product issues, but they happen, and they drag down the overall rating of the product itself. Who knows, maybe some bad reviewers are hoping for an offer from the maker to improve their ratings.

    However, I do pay more attention to the bad reviews and the reasons given. If there is a pattern of failures, then I'll avoid a product. And I pay them more attention than I do to the good reviews, which even now often appear to be fake, exaggerated, written by idiots ("I've just received such'n'such, it looks wonderful though I haven't tried to use it yet - here's 5 *'s") or clearly from professional reviewers.

  • Just based on experience, products with very few reviews are sometimes fakes. The more reviews something has, the less likely it is to be a fake product. With many reviews accumulated over time, you can also see whether the manufacturer has been fixing the problems and what kinds of problems people have, and decide whether those problems will matter to you.

  • There is a chance it could be worse, too. I've bought things that were useless junk but that a few people had given relatively good reviews. When I buy stuff from Harbor Freight I know exactly what I am getting: a throwaway tool that I might use a couple of times. People don't always want the best...
  • 1) I read the bad reviews first to see how many of the bad reviews are from idiots. For example, I purchased a product that adapted VGA to video. There was a switch on the side of the box for NTSC/PAL. A number of N. American consumers indicated the box didn't work and that the picture was monochrome (black & white). Well, those folks obviously didn't flip the switch from PAL (as shipped) to NTSC, because monochrome is a symptom of video format mismatch (simplifying a bit to illustrate the point). So, I discoun
    • by murdocj ( 543661 )

      I do exactly the same (although not as systematically). For example, when I buy games from Steam, if the negative reviews are centered around "too much DLC" I pretty much discount that, because I play games pretty slowly, so I don't care that I'm not getting "the full game" in one shot. If on the other hand the negatives complain about stuff I care about, I know to pass, even if the positive reviews are gushing about how wonderful the game is.

  • by Anonymous Coward

    First of all if two items have the same overall score there's no obvious way to break the tie. To assert that "statistically" people ought to buy the one with fewer reviews, because it might be better than rated, ignores the fact that it might just as easily be WORSE than rated. If I had no choice and all other things were truly equal, I'd probably choose the item with more reviews. There's less chance it is an outstanding product, but also less chance that it'll end up being a total waste of money.

  • Why are people buying a product with crummy reviews anyway? Sounds like they don't have any choice, so perhaps they are deciding they'd rather get something that works poorly, than something that might not work at all.

  • It would be nice if an extension could be created to work on eliminating them to provide protection from the subconscious impact. But, they are very thoroughly embedded - especially all of the little star ratings scattered about.

  • I tend to not buy the item with the most reviews, but it's not directly due to the number of reviews, not in the sense of a causal relationship. It's because I tend to be buying the newer product, and the product with the most reviews is usually the older product.

    But it depends on the product. If we are talking about restaurants, a place often has many reviews simply because it's a successful restaurant.
    I mostly tend to go with professional reviewers that have a column in the newspaper.
    Yelp restaurant rev

  • by account_deleted ( 4530225 ) on Sunday August 27, 2017 @06:02PM (#55094513)
    Comment removed based on user account deletion
  • Unless I really like the product, I don't generally take the time to write a good review. If I'm only satisfied with the product, I don't bother with a review. I have other things to do.

    Most of the time, if I take the time to post a review, it's because I'm not particularly happy with my purchase.
  • Most fake reviews seem to be just 1-2 liners.
  • If I HAVE to buy a product and the choice is between two 1-star products, I will buy the one with more buyers, because there is a better chance the company does not go under and leave me with no support. If you cannot get quality, then go for the mass.
  • I was looking at a book on Amazon. One reviewer gave it one star. Their review: "Cover was torn."
