How Many Bad Customer Reviews Is Too Many? One
February 10, 2020
By Paulo Albuquerque
Sellers should focus less on cultivating a forest of positive reviews, and more on the state of each tree.
Online shopping has brought sweeping changes to consumer habits, including how people decide what to buy. As consumer product reviews have proliferated and become radically more accessible, they have emerged as a strong potential counterweight to official marketing campaigns. The rise of e-commerce platforms is largely responsible. Browsing Amazon’s digital aisles, one routinely encounters dozens, if not hundreds, of customer reviews for each product at the point of sale – occupying the influential position reserved for salaried salespeople in conventional brick-and-mortar stores.
In response, some online vendors and companies have resorted to deceptive practices such as “brushing” (i.e. fabricating transactions for the sake of generating positive reviews) or bribing consumers to post gushing reviews of their products. However, these tactics may not be worth the serious ethical risk and nominal costs they incur, according to our ongoing research.
Our study finds that even when the totality of peer opinion about a given product is strongly favourable, a single negative review can have an outsized impact on sales. Therefore, companies might want to focus less on cultivating a forest of great reviews, and more on the state of each tree the customer may see.
Isolating the impact of a single review
Plenty of researchers before us have explored various connections between consumer sentiment and sales. Isolating the impact of individual reviews, however, required us to disentangle their positivity or negativity from any other factors within the observation period that may have affected sales – such as inventory issues or warranty disputes with manufacturers, which could come to customers’ attention in several ways.
To illustrate how to solve this problem, we used click-stream data covering a three-month period from a large online retailer based in the United Kingdom. Our data set comprised more than 31,000 discrete products and more than 600,000 reviews. We compared the purchase and browsing activity of customers who saw a negative review (i.e. three out of five stars or less) to that of customers who visited the same product page but did not see the review (either because they did not scroll down far enough or because they did not click over to the next page of reviews).
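For readers who like to see the mechanics, here is a minimal sketch of that exposure-based comparison. The data and column names (product_id, saw_negative_review, purchased) are hypothetical and stand in for the retailer's actual schema; the study itself relies on a richer model of search and purchase, so this only illustrates the grouping logic.

```python
# Illustrative only: hypothetical click-stream columns, not the study's actual data.
import pandas as pd

# Each row is one visit to a product page.
# saw_negative_review: True if a review of 3 stars or fewer was visible to the visitor.
clicks = pd.DataFrame({
    "product_id": [101, 101, 101, 102, 102, 102],
    "saw_negative_review": [True, False, False, True, True, False],
    "purchased": [0, 1, 1, 0, 1, 1],
})

# Compare purchase rates, per product, for visitors who did vs. did not
# see a negative review on the same page.
rates = (
    clicks.groupby(["product_id", "saw_negative_review"])["purchased"]
    .mean()
    .unstack("saw_negative_review")
    .rename(columns={True: "exposed", False: "not_exposed"})
)
rates["relative_drop"] = 1 - rates["exposed"] / rates["not_exposed"]
print(rates)
```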
There were distinct differences between these two groups. Adding one negative review to an otherwise “clean” product page decreased the probability of purchase by 50 percent on average across product categories, among consumers who read any reviews at all. Moving the negative review onto the second page, meaning customers had to perform an extra click to view it, boosted the likelihood of purchase by about 40 percent.
Thanks to our click-stream data, we could also calculate the average effect of exposure to a single negative review on subsequent browsing activity. The number of consumers who decided to continue shopping for alternative products rather than clicking “Buy Now” rose by about 10 percent with a bad review in sight. In other words, with no low rating to scare them off, consumers were around 10 percent more likely to end their search on the focal page – either by buying the product in question or concluding their shopping session for the time being. After being spooked by a bad review, customers who ultimately decided to buy a competing product ended up paying at least 20 percent more, on average. Think of this as the premium people are willing to pay for the peace of mind that only a sea of uniformly positive reviews can produce.
To summarise these effects, we determined that the average product in our study had a sales elasticity (i.e. the impact of a single negative review on sales) of -18 percent, and a search elasticity (i.e. the likelihood that a bad review would spur further search) of 4 percent.
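To make those elasticities concrete, here is a back-of-the-envelope illustration. The baseline figure of 1,000 weekly units is made up for the example and does not come from the study.

```python
# Hypothetical baseline numbers, for illustration only.
baseline_sales = 1000          # weekly units sold with no visible negative review
sales_elasticity = -0.18       # one visible negative review cuts sales by 18 percent
search_elasticity = 0.04       # and pushes 4 percent more visitors to keep searching

sales_with_bad_review = baseline_sales * (1 + sales_elasticity)
print(sales_with_bad_review)   # 820.0 units: roughly 180 lost sales per week
```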
Product comparisons
In an online shopping context, however, averages can only tell you so much. Sales and search elasticity vary greatly from product to product. For example, potential buyers will shop much more carefully for computers than for, say, curtains. We entrust many core functions of our lives to our laptops, whereas curtains mostly have one job: to shut out the light. Consequently, online shoppers place more stock in minority opinions about the former than the latter.
Indeed, we found that even within the electronics sector, there was a high degree of variance in elasticity. Printer ink cartridges were among the least susceptible to outlier negative reviews, with elasticity levels of less than -10 percent, on average. On the other end of the spectrum, the elasticity of tablets, televisions and printers was close to -20 percent in many cases. In addition, the elasticity of some headphones and telephones even exceeded -40 percent.
Managers can use the above information to get a feel for how vulnerable their own products might be. In one sense, though, elasticity can be misleading. Highly elastic products, i.e. those for which every review counts a great deal, may also generate a greater volume of reviews, resulting in heavy churn on product pages. That means that the rare negative reviews will sink out of sight faster as new ones pour in. By contrast, each bad review for a relatively inelastic product tends to have a weaker, but longer-lasting effect. For example, the median duration of a bad review on laptop product pages was two weeks; for printer ink cartridges, it was 56 days. For curtains, the negative review lingered for 166 days.
Eschew fake reviews
Because the per-review effect is tied to its visibility on a product page, companies might be enticed to purchase fake positive reviews to push the negative outliers below the threshold of consumer awareness. However, the number of positive reviews needed to accomplish this makes the sketchy tactic untenable in most cases. According to our calculations, only about five percent of products are in high enough demand to justify the financial outlay.
Instead, managers would do well to redouble their efforts to please customers who register their displeasure through negative reviews. The ideal endgame would be to resolve the issue that caused the negative review – and, then and only then, ever-so-politely request that the reviewer revise it.
Paulo Albuquerque is an Associate Professor of Marketing at INSEAD.