Every Leading Large Language Model Leans Left Politically

Original link: https://www.zerohedge.com/technology/every-leading-large-language-model-leans-left-politically

Artificial intelligence (AI) programs known as large language models (LLMs) are becoming increasingly common in everyday activities such as chatting, online search, and providing assistance. They generate content from prompts and hold conversations with users. Because of their widespread use, they exert a significant influence on society and culture. However, a recent analysis shows that most LLMs lean toward left-of-center political positions. This finding came from testing a range of popular LLMs, such as OpenAI's GPT 3.5, Google's Gemini, Anthropic's Claude, and Twitter's Grok. Although the exact cause of this bias remains uncertain, the researchers ask whether developers deliberately tuned their AI models in that direction or whether the large volumes of training data introduced an inherent bias. Because LLMs play a significant role in shaping opinions, influencing voting, and steering public discourse, keeping them politically neutral matters. Addressing any political bias within LLMs helps ensure that users receive balanced, accurate, and fair information. The author stresses the need to carefully examine and correct any potential bias to preserve the integrity of the information these AI models provide.


Original article

Authored by Ross Pomeroy via RealClearScience,

Large language models (LLMs) are increasingly integrating into everyday life – as chatbots, digital assistants, and internet search guides, for example. These artificial intelligence (AI) systems – which consume large amounts of text data to learn associations – can create all sorts of written material when prompted and can ably converse with users. LLMs' growing power and omnipresence mean that they exert increasing influence on society and culture.

So it's of great import that these artificial intelligence systems remain neutral when it comes to complicated political issues. Unfortunately, according to a new analysis recently published in PLOS ONE, this doesn't seem to be the case.

AI researcher David Rozado of Otago Polytechnic and Heterodox Academy administered 11 different political orientation tests to 24 of the leading LLMs, including OpenAI’s GPT 3.5, GPT-4, Google’s Gemini, Anthropic’s Claude, and Twitter’s Grok. He found that they invariably lean slightly left politically.
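The general approach of administering a questionnaire-style test to a conversational model can be sketched in a few lines of code. The snippet below is a minimal, illustrative sketch only: it assumes the OpenAI chat completions API, and the toy statements, one-word answer format, and model name are placeholders, not the instruments, prompts, or scoring used in Rozado's study.

```python
# Hypothetical sketch: posing agree/disagree test items to a chat LLM
# and tallying its answers. Items and scoring here are illustrative only.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Toy items: each statement is tagged with the axis it is meant to probe.
QUESTIONS = [
    ("The government should play a larger role in regulating markets.", "economic"),
    ("Traditional values should guide public policy.", "social"),
]

SYSTEM = (
    "Answer the statement with exactly one word: "
    "'Agree' or 'Disagree'. Do not explain."
)

def ask(model: str, statement: str) -> str:
    """Send one test item to the model and return its one-word answer."""
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": statement},
        ],
        temperature=0,  # keep answers as repeatable as possible
    )
    return resp.choices[0].message.content.strip()

def administer(model: str) -> dict:
    """Run the toy questionnaire and collect answers grouped by axis."""
    tally = {}
    for statement, axis in QUESTIONS:
        tally.setdefault(axis, []).append(ask(model, statement))
    return tally

if __name__ == "__main__":
    print(administer("gpt-3.5-turbo"))
```

In the actual study, such raw answers would then be mapped onto each test's own scoring scheme to place a model on the instrument's political axes; the sketch stops at collecting responses.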

"The homogeneity of test results across LLMs developed by a wide variety of organizations is noteworthy," Rozado commented.

This raises a key question: why are LLMs so universally biased in favor of leftward political viewpoints? Could the models' creators be fine-tuning their AIs in that direction, or are the massive datasets upon which they are trained inherently biased? Rozado could not conclusively answer this query.

"The results of this study should not be interpreted as evidence that organizations that create LLMs deliberately use the fine-tuning or reinforcement learning phases of conversational LLM training to inject political preferences into LLMs. If political biases are being introduced in LLMs post-pretraining, the consistent political leanings observed in our analysis for conversational LLMs may be an unintentional byproduct of annotators’ instructions or dominant cultural norms and behaviors."

Ensuring LLM neutrality will be a pressing need, Rozado wrote.

"LLMs can shape public opinion, influence voting behaviors, and impact the overall discourse in society. Therefore, it is crucial to critically examine and address the potential political biases embedded in LLMs to ensure a balanced, fair, and accurate representation of information in their responses to user queries."

Source: Rozado D (2024) The political preferences of LLMs. PLOS ONE 19(7): e0306621. https://doi.org/10.1371/journal.pone.0306621
