eCommerceNews US - Technology news for digital commerce decision-makers

Bazaarvoice finds AI review use fuels shopper mistrust

Tue, 24th Mar 2026

Bazaarvoice has published research showing that 23% of consumers use artificial intelligence tools to write product reviews, while 64% still view AI-assisted reviews as inauthentic.

The figures highlight a tension in online shopping behaviour. AI tools are becoming part of the review-writing process, yet many consumers remain uneasy about whether those reviews still reflect real customer experience.

Only 16% of respondents said they were very confident they could tell the difference between an AI-written review and one written by a person. The gap between widespread scepticism and limited detection ability suggests many shoppers are wary of content they may not actually be able to identify in practice.

The research also indicates that most people using AI for reviews are not asking it to create opinions from scratch. Among those using third-party generative AI tools such as ChatGPT, Gemini or Perplexity, 83% said they write the full review themselves first and then use AI to refine grammar and tone.

Another 53% said they provide their own notes and bullet points to guide the tool. Nearly half, 47%, use on-site suggestion tools embedded in review platforms to improve the wording of their submissions.

Trust Questions

The findings suggest the debate is less about whether AI is involved and more about how much it shapes the final text. Consumers surveyed by Bazaarvoice drew a distinction between using software to polish wording and using it to produce a review that may not reflect direct experience.

That concern was also evident among respondents who had used AI assistance themselves. Of that group, 48% said AI-generated outputs can feel robotic, 44% said the tools erase their voice, and 35% worried AI could introduce inaccurate product details.

Doug Straton, chief marketing officer at Bazaarvoice, said authenticity remains central to how shoppers judge review content. "AI is rapidly becoming part of the shopping journey, but when it comes to reviews, authenticity still matters above all else," he said.

"Consumers may use AI as a tool to help refine their thoughts, but they ultimately trust reviews that reflect real experiences and real voices. Our Content Coach feature is the perfect example of using AI to help generate reviews authentically. It doesn't write anything for the shopper - it simply uses AI to suggest unbiased topic ideas tailored to the specific product category for the reviewer to write about, preserving review authenticity while increasing the length and richness of reviews to fuel future customer purchases."

Consumer Scepticism

The data adds to a wider debate over the role of generative AI in commerce and online discovery. Product reviews have long been a critical tool for retailers and shoppers, but trust depends heavily on the sense that they come from genuine users and reflect actual experiences.

As more consumers use AI tools to edit text, the line between assistance and authorship may become harder to define. For platforms that host reviews, that creates a challenge in deciding what should be disclosed, what should be moderated, and what still counts as authentic user-generated content.

Alex Kirk, director of insights at Bazaarvoice, said the same issue affects confidence in AI-led product recommendations.

"Whether it comes to artificial intelligence usage in review writing or LLM-suggested product recommendations, our research has shown time and time again that authenticity is paramount," he said.

"In previous research we've found that consumers are more likely to trust product recommendations provided from generative engines when they know that authentic ratings and reviews are sourcing the suggestions, so it makes sense that they want reviews to be written based on real human experience. Consumers can use AI to improve the grammar and clarity of their reviews, but they should always make sure they're getting their honest thoughts and opinions across."

Platform Response

Outright bans on AI use in reviews would be difficult to enforce, partly because many devices and platforms already include spelling and grammar assistance. That raises practical questions for review sites trying to distinguish between acceptable editing help and misleading synthetic content.

Bazaarvoice said it relies on moderation and verification measures, including a visual marker attached to reviews that meet its internal authenticity standards. It argues that disclosure and review controls will become more important as AI writing tools spread.

Straton said the company does not regard all AI-assisted reviews as deceptive. "AI is here to stay, and its adoption is only accelerating; banning it entirely from review content is not only extremely difficult, it's unnecessary - we'd have to ban the use of spelling and grammar checks built into our device operating systems while review writing, after all," he said.

"A review written with AI assistance is not inherently fraudulent. There is a world of difference between a customer using AI to articulate their genuine experience and a bad actor using it to mass-produce fake reviews for products they've never touched. To ensure that AI and authenticity can coexist, we're continuing to create digital guardrails such as our Intelligent Trustmark, a visual symbol we display on reviews that have passed Bazaarvoice's rigorous authenticity standards, and are proven to be real and verified. Proper disclosure and unbiased moderation processes will be key as this space evolves."

The research was based on a two-phase study. The first stage surveyed more than 1,300 adults, and the second targeted participants who said they would be unlikely to buy a product if they knew AI had been used to assist the review-writing process.