Study shows that tacking the “AI” label on products may drive people away

Original link: https://www.cnn.com/2024/08/10/business/brands-avoid-term-customers/index.html

A study suggests that labeling a product as using artificial intelligence (AI) may deter customers from buying it. Researchers surveyed individuals across different age groups and compared products described as "high tech" with identical products described as using AI. In every case, willingness to buy or use the item dropped significantly when AI appeared in the description. The researchers examined both low-risk items, such as household appliances, and high-risk items, such as self-driving cars, AI-based investment advice, and diagnostic systems. Although more people rejected the high-risk items, non-buyers made up the majority in both categories.

Two kinds of trust shape consumers' negative perception of AI-labeled goods. Cognitive trust refers to the elevated standard people hold AI to, expecting it to be an error-free machine; when mistakes occur, that trust erodes quickly. Google, for example, was criticized earlier this year for incorrect and misleading information produced by its AI-powered search results summary tool. Emotional trust comes into play because people's limited understanding of and feelings about AI lead them to form subjective judgments about the technology. Concerns about how AI handles personal data further erode interest and trust. The lack of transparency around these issues poses a potential risk to brand reputation. Companies should avoid vague terms like "AI-powered" when marketing their products and should instead explain the benefits and effectively ease consumers' fears.

Artificial intelligence (AI) is currently overhyped, and its impact on businesses has fallen short of expectations. Although companies such as Google and Facebook claim that AI has boosted their advertising revenue, the effect has been marginal. Companies have rushed to fold AI into their products without fully understanding its potential benefits and limitations, leading to poor execution and underwhelming results. Many businesses treat AI as a quick-profit tool rather than a long-term strategy, neglecting proper testing, research, and development. This rush to capitalize on AI has produced widespread lies about progress and inadequate regulation, breeding distrust among consumers and professionals. Moreover, AI may not deliver significant returns for established companies, as the industry's current consolidation phase suggests. Something similar happened in manufacturing with automation and robotics, which left large numbers of workers unemployed; AI could likewise displace white-collar jobs, leaving workers vulnerable to layoffs and dislocation. Overall, the author argues that AI's potentially positive impact is overshadowed by corporate greed, greedy politicians, and the misuse of resources, ultimately leading to the decline of society and communities.

Original article

CNN  — 

Even as tech giants pour billions of dollars into what they herald as humanity’s new frontier, a recent study shows that tacking the “AI” label on products may actually drive people away.

A study published in the Journal of Hospitality Marketing & Management in June found that describing a product as using AI lowers a customer’s intention to buy it. Researchers sampled participants across various age groups and showed them the same products – the only difference between them: one was described as “high tech” and the other as using AI, or artificial intelligence.

“We looked at vacuum cleaners, TVs, consumer services, health services,” said Dogan Gursoy, one of the study’s authors and the Taco Bell Distinguished Professor of hospitality business management at Washington State University, in an interview with CNN. “In every single case, the intention to buy or use the product or service was significantly lower whenever we mentioned AI in the product description.”

Despite AI’s rapid advancement in recent months, the study highlights consumers’ hesitance to incorporate AI into their daily lives – a marked divergence from the enthusiasm driving innovations in big tech.

Included in the study was an examination of how participants viewed products considered “low risk,” which included household appliances that use AI, and “high risk,” which included self-driving cars, AI-powered investment decision-making services and medical diagnosis services.

While the percentage of people rejecting the items was greater in the high-risk group, non-buyers were the majority in both product groups.

There are two kinds of trust that the study says play a part in consumers’ less-than-rosy perception of products that describe themselves as “AI-powered.”

The first kind, cognitive trust, has to do with the higher standard that people hold AI to as a machine they expect to be free from human error. So, when AI does slip up, that trust can be quickly eroded.

Take Google’s AI-generated search results overview tool, which summarizes search results for users and presents them at the top of the page. People were quick to criticize the company earlier this year for providing confusing and even blatantly false information in response to users’ questions, pressuring Google to walk back some of the feature’s capabilities.

Gursoy says that limited knowledge and understanding about the inner workings of AI forces consumers to fall back on emotional trust and make their own subjective judgments about the technology.

“One of the reasons why people are not willing to use AI devices or technologies is fear of the unknown,” he said. “Before ChatGPT was introduced, not many people had any idea about AI, but AI has been running in the background for years and it’s nothing new.”

Even before chatbot ChatGPT burst into public consciousness in 2022, artificial intelligence was used in technology behind familiar digital services, from your phone’s autocorrect to Netflix’s algorithm for recommending movies.

And the way AI is portrayed in pop culture isn’t helping boost trust in the technology either. Gursoy added that Hollywood science fiction films casting robots as villains had a bigger impact on shaping public perception towards AI than one might think.

“Way before people even heard about AI, those movies shaped people’s perception of what robots that run by AI can do to humanity,” he said.

Another part of the equation influencing customers is the perceived risk around AI – particularly with how it handles users’ personal data.

Concerns about how companies manage customers’ data have tamped down excitement around tools meant to streamline the user experience at a time when the government is still trying to find its footing on regulating AI.

“People have worries about privacy. They don’t know what’s going on in the background, the algorithms, how they run, that raises some concern,” said Gursoy.

This lack of transparency is something that Gursoy warns has the potential to sour customers’ perceptions towards brands they may have already come to trust. It is for this reason that he cautions companies against slapping on the “AI” tag as a buzzword without elaborating on its capabilities.

“The most advisable thing for them to do is come up with the right messaging,” he said. “Rather than simply putting ‘AI-powered’ or ‘run by AI,’ telling people how this can help them will ease the consumer’s fears.”
