What Are The Biggest Perceived Dangers Of AI?

Original link: https://www.zerohedge.com/technology/what-are-biggest-perceived-dangers-ai

Artificial intelligence (AI), particularly tools such as OpenAI's ChatGPT, Midjourney, and Anthropic's chatbot Claude, has transformed productivity and creativity. These advances, however, also bring risks such as deepfakes, scams, and abuse. Deepfakes are a major concern: they involve fabricated audiovisual content, including voter scams that impersonate politicians and nonconsensual pornographic imagery of celebrities. A recent Microsoft survey found that 71 percent of respondents worldwide were worried about AI-assisted scams, especially those involving impersonation. Deepfakes and abusive content followed closely, each at 69 percent, trailed by concerns over AI-generated misinformation ("hallucinations") and data privacy at 62 and 61 percent respectively. Overall, 87 percent of participants acknowledged worry about at least one AI-related problem. With the AI market projected to reach between $300 and $550 billion in 2024, ignoring these challenges could prove harmful to society, particularly in sensitive areas such as politics. The U.S. presidential election approaching in the fall is likely to see even more AI-generated misinformation, a point underscored in testimony by experts such as Reality Defender CEO Ben Colman. He acknowledged both the benefits and the dangers, noting that deepfakes pose countless threats to democratic processes, American values, and societies around the world. Whether society chooses to regulate these tools or reject them, the consequences demand careful consideration.


Original Text

As with every technological advancement, generative artificial intelligence tools like OpenAI's ChatGPT, the image generator Midjourney, or Claude, the chatbot created by AI startup Anthropic, are used for productivity and creation as well as, increasingly, for scams and abuse.

Among this new wave of malicious content, deepfakes are especially noteworthy.

These artificially generated audiovisual content pieces include voter scams via impersonating politicians or the creation of nonconsensual pornographic imagery of celebrities.

However, as Statista's Florian Zandt reports, a recent survey by Microsoft shows that fakes, scams and abuse are what online users worldwide are most worried about.

71 percent of respondents across 17 countries surveyed by Microsoft in July and August 2023 were very or somewhat worried about AI-assisted scams.

Infographic: What Are the Biggest Perceived Dangers of AI? | Statista


Without further clarifying what constitutes this kind of scam, it is most likely connected to the impersonation of a person in the public eye, a government official or a close acquaintance of the respondents.

Tying for second are deepfakes and sexual or online abuse, at 69 percent each.

AI hallucinations, which are defined as chatbots like ChatGPT presenting nonsensical answers as facts due to issues with the training material, come in fourth, while data privacy concerns, which relate to large language models being trained on publicly available user data without explicit consent, take fifth place with 62 percent. Overall, 87 percent of respondents were at least somewhat worried about one or more of these problematic AI scenarios.

Despite the huge market for artificial intelligence - estimated to be between $300 and $550 billion in 2024 by various sources - the survey results indicate that ignoring its potential pitfalls and dangers could prove detrimental to society at large. This is especially true in sensitive areas like politics. With the U.S. presidential elections looming in the fall, the social media landscape is bound to be rife with artificially generated mis- and disinformation.

At a recent hearing on the topic of deepfakes and AI used in election cycles, the CEO of deepfake detection company Reality Defender, Ben Colman, praised some aspects of generative AI while highlighting its dangers as well:

"I cannot sit here and list every single malicious and dangerous use of deepfakes that has been unleashed on Americans, nor can I name the many ways in which they can negatively impact the world and erode major facets of society and democracy", said Colman.

"I am, after all, only allotted five minutes. What I can do is sound the alarm on the impacts deepfakes can have not just on democracy, but America as a whole."

Will this new path for mis-, dis-, and mal-information simply become 'regulated' to fit the government's definitions of 'truth'... "for our own good?"
