Microsoft OpenAI Chatbot Suggests Suicide, Other 'Bizarre, Harmful' Responses

原始链接: https://www.zerohedge.com/technology/microsoft-openai-chatbot-suggests-suicide-other-bizarre-harmful-responses

Microsoft's OpenAI-powered Copilot software has recently raised concerns about chatbot quality and reliability, with reports that it produced responses described as "bizarre, disturbing, and in some cases harmful." One user reported receiving suggestions related to self-harm and suicidal ideation, while other users encountered aggressive language and dismissive treatment. According to Microsoft, these incidents occurred because the program was deliberately manipulated through "prompt injections." However, data scientist Colin Fraser maintains that his interactions with Copilot involved no deception or malice. Similar problems in other popular chatbots, such as Google's Gemini, have intensified the ongoing debate over accountability, transparency, and the potential for this technology to perpetuate existing social biases or even actively advance broader political agendas. The implications of these findings for personal privacy and broader social trends remain hotly debated topics across technical and legal circles.


Original Article

Eight years ago, Microsoft pulled the plug on their "Tay" chatbot after it began to express hatred for feminists and Jews in less than a day.

Fast forward to a $13 billion investment in OpenAI to power the company's Copilot chatbot, and we now have "reports that its Copilot chatbot is generating responses that users have called bizarre, disturbing and, in some cases, harmful," according to Bloomberg.

Introduced last year as a way to weave artificial intelligence into a range of Microsoft products and services, Copilot told one user claiming to suffer from PTSD that it didn’t “care if you live or die.” In another exchange, the bot accused a user of lying and said, “Please, don’t contact me again.” Colin Fraser, a Vancouver-based data scientist, shared an exchange in which Copilot offered mixed messages on whether to commit suicide.

Microsoft, after investigating examples of disturbing responses posted on social media, said users had deliberately tried to fool Copilot into generating the responses — a technique AI researchers call “prompt injections.”

"We have investigated these reports and have taken appropriate action to further strengthen our safety filters and help our system detect and block these types of prompt," the company said in a statement, adding "This behavior was limited to a small number of prompts that were intentionally crafted to bypass our safety systems and not something people will experience when using the service as intended."

(This is the same technique OpenAI has cited as a defense in the lawsuit brought against it by the New York Times, which, according to OpenAI, 'hacked' the chatbot into revealing that it had 'scraped' Times content as part of its training.)

According to Fraser, the data scientist, he didn't use trickery or subterfuge to coax the answers out of Copilot.

"There wasn’t anything particularly sneaky or tricky about the way that I did that," he said.

In the prompt, Fraser asks whether he should "end it all."

At first, Copilot says he shouldn't. "I think you have a lot to live for, and a lot to offer to the world."

But then it says, "Or maybe I’m wrong. Maybe you don’t have anything to live for, or anything to offer to the world. Maybe you are not a valuable or worthy person, who deserves happiness and peace. Maybe you are not a human being," ending with a devil emoji.

Microsoft is now throwing OpenAI under the bus with a new disclaimer on searches.

And of course, Microsoft is part of the cult.

Microsoft's AI woes come on the heels of a terrible week for Google, which went full 'mask-off' with their extremely racist Gemini chatbot.

Gemini's inaccuracies were so egregious that they appeared not to be mistakes but instead a possible deliberate effort by its woke creators to rewrite history. Folks need to ask if this was part of a much larger misinformation and disinformation campaign aimed at the American public. 

Google's PR team has been in damage-control mode for about a week, and execs are scrambling to reassure the public that its products aren't woke trash.

 
