How should online portals handle fraudulent reviews?

Consumers who use the Internet to learn about products increasingly rely on online reviews to make purchasing decisions. The growing use of online reviews for legitimate promotion has been accompanied by a rise in fraudulent reviews: reviews posted by firms to artificially inflate ratings of their own products, or reviews posted by firms or third parties to lower the ratings of competitors' products. A new study sought to determine how consumers respond to potentially fraudulent reviews and how review portals (e.g., Amazon, Expedia, TripAdvisor, Yelp) can leverage this information to design better fraud-management policies and increase consumers' trust. It found that portals that display fraudulent reviews, rather than deleting them, are more likely to boost buyers' trust.

The study, by researchers at Carnegie Mellon University (CMU) and the University of Washington, is published in Information Systems Research.

"Consumers rely on the content of online reviews to make decisions about purchases, and about 15 to 30 percent of all online reviews are estimated to be fraudulent," explains Beibei Li, professor of information systems and management at CMU's Heinz College, who led the study. "But beyond creating algorithms that detect the initial fraud, researchers have not fully explored what review portals do once fraudulent reviews are detected."

Indeed, there is no consensus among firms regarding what to do with these types of reviews. Some review portals delete fraudulent reviews, others publicly acknowledge censoring fake reviews and sue firms suspected of posting them, and still others make the fraudulent reviews visible to the public with a notation that they may be fraudulent.

In this study, the researchers sought to determine how review portals should display fraudulent information to increase consumers' trust in the platform. Specifically, they conducted three exercises built around an experimental restaurant review portal they designed and implemented; a reservation system that used real data; and a behavior-tracking system that recorded the amount of time consumers spent on each page, the number of clicks, and the number of restaurant pages visited. They also identified which restaurants were chosen by which consumers.

The study found that consumers tended to trust the information provided by platforms the most when the portal displayed fraudulent reviews along with nonfraudulent reviews, as opposed to the more common practice of censoring suspected fraudulent reviews. The impact of fraudulent reviews on consumers' decision-making process increased with their uncertainty about the initial evaluation of product quality: When consumers were very uncertain about a product, they treated fraudulent reviews as an important supplemental source of information for decision making.

The study also found that consumers weren't influenced by the content of fraudulent reviews: When they chose to use this information, they couldn't distinguish between different types of fraudulent information (e.g., malicious negative reviews or self-promotional positive reviews). This suggests that firms would benefit by using a method that incorporates the motivational differences between positive and negative fraudulent reviews to help consumers make decisions.

The researchers say their findings have practical implications for managers of review portals or platforms who wish to boost consumers' trust:

  • Platforms can increase consumers' trust by leaving potentially fraudulent reviews on their site with an explanatory notation, rather than censoring those reviews without comment.

  • Potentially fraudulent reviews are best displayed using a decision method that reduces the burden on consumers.

  • Any decrease in trust a platform may face from admitting to users that there is fraud on its site is balanced by an increase in trust from consumers who already thought there was fraud and now see that something is being done to address it.

"Our study advances understanding of how consumers respond to fraudulent information online and furthers the state-of-the-art practice in the industry for handling fraudulent reviews," explains Michael D. Smith, professor of information technology and marketing at CMU's Heinz College, who coauthored the study. "It also can inform regulatory and policy discussions about the widespread incidence of fake information disseminated online."

The authors acknowledge limitations to their study: In their work, they used Amazon Mechanical Turk (a crowdsourcing marketplace that lets people and businesses outsource tasks to a virtual workforce) instead of observing actual consumers. And they analyzed only settings in which reviewers had no prior knowledge about the portal they were using.
