'Dangerous nonsense': AI-authored books about ADHD for sale on Amazon

Original link: https://www.theguardian.com/technology/2025/may/04/dangerous-nonsense-ai-authored-books-about-adhd-for-sale-on-amazon

Amazon has been criticised for allowing AI-generated books, particularly on sensitive topics such as ADHD, to flood its marketplace. These books are cheap and easy to produce and often contain misinformation and potentially harmful advice. Experts warn that the "wild west" of online marketplaces, lacking any regulation of AI-produced work, allows the practice to flourish, while Amazon's business model incentivises it. AI detection tools have confirmed that many ADHD guides were most likely written by chatbots, lacking real expertise and critical analysis. This exposes vulnerable readers to the risk of misdiagnosis or worsening conditions. While Amazon says it has content guidelines and removes non-compliant works, critics argue the company has a responsibility to protect customers from potentially harmful AI-generated content, especially as AI bypasses the traditional safeguards of the publishing industry. One customer, Richard Wordsworth, found an ADHD guide on Amazon riddled with inaccuracies and harmful advice, underscoring the damage such books can do.

A Hacker News thread discusses the article about AI-authored books on ADHD (attention deficit hyperactivity disorder) sold on Amazon, which the article labels "dangerous nonsense". Commenters debated the article's argument, questioning the evidence behind the claim that the books were AI-generated and noting that some statements about ADHD deemed "offensive" in one book were in fact grounded in real research. The discussion broadened to the general problem of low-quality content proliferating as creation and distribution costs fall, a problem amplified by AI. Commenters voiced concern about the rise of "AI-generated plausible nonsense" and its effects on online discourse and commerce, highlighting the corrosive role of spam and automated content generation. Some argued that better filtering mechanisms, such as human review on platforms like Amazon, are needed to cope with the influx of AI-generated content.

Original article

Amazon is selling books marketed at people seeking techniques to manage their ADHD that claim to offer expert advice yet appear to be authored by a chatbot such as ChatGPT.

Amazon’s marketplace has been deluged with works produced by artificial intelligence that are easy and cheap to publish but include unhelpful or dangerous misinformation, such as shoddy travel guidebooks and mushroom foraging books that encourage risky tasting.

A number of books have appeared on the online retailer’s site offering guides to ADHD that also seem to be written by chatbots. The titles include Navigating ADHD in Men: Thriving with a Late Diagnosis, Men with Adult ADHD: Highly Effective Techniques for Mastering Focus, Time Management and Overcoming Anxiety and Men with Adult ADHD Diet & Fitness.

Samples from eight books were examined for the Guardian by Originality.ai, a US company that detects content produced by artificial intelligence. The company said each had a rating of 100% on its AI detection score, meaning its systems were highly confident that the books were written by a chatbot.

Experts said online marketplaces were a “wild west” owing to the lack of regulation around AI-produced work – and dangerous misinformation risked spreading as a result.

Michael Cook, a computer science researcher at King’s College London, said generative AI systems were known to give dangerous advice, for example around ingesting toxic substances, mixing together dangerous chemicals or ignoring health guidelines.

As such, it was “frustrating and depressing to see AI-authored books increasingly popping up on digital marketplaces” particularly on health and medical topics, which could result in misdiagnosis or worsen conditions, he said.

“Generative AI systems like ChatGPT may have been trained on a lot of medical textbooks and articles, but they’ve also been trained on pseudoscience, conspiracy theories and fiction,” said Cook.

“They also can’t be relied on to critically analyse or reliably reproduce the knowledge they’ve previously read – it’s not as simple as having the AI ‘remember’ things that they’ve seen in their training data. Generative AI systems should not be allowed to deal with sensitive or dangerous topics without the oversight of an expert,” he added.

Yet Cook noted Amazon’s business model incentivised this type of practice, as it made “money every time” people bought a book, whether the work was “trustworthy or not”, while the generative AI companies that created the products were not held accountable.

Prof Shannon Vallor, the director of the University of Edinburgh’s Centre for Technomoral Futures, said Amazon had “an ethical responsibility to not knowingly facilitate harm to their customers and to society”, although it would be “absurd” to make a bookseller responsible for the contents of all its books.

Problems were arising because the guardrails previously deployed in the publishing industry – such as reputational concerns and the vetting of authors and manuscripts – had been completely transformed by AI, she noted.

This was compounded by a “wild west” regulatory environment in which there were no “meaningful consequences for those who enable harms”, fuelling a “race to the bottom”, Vallor said.

At present, there is no legislation that requires AI-authored books to be labelled as such. Copyright law only applies if a specific author’s content has been reproduced, although Vallor noted that tort law should impose “basic duties of care and due diligence”.

The Advertising Standards Authority said AI-authored books cannot be advertised in a way that gives the misleading impression they were written by a human, enabling people who had seen such books to submit a complaint.

Richard Wordsworth was hoping to learn about his recent adult ADHD diagnosis when his father recommended a book he found on Amazon after searching “ADHD adult men”.

When Wordsworth sat down to read it, “immediately, it sounded strange”, he said. The book opened with a quote from the conservative psychologist Jordan Peterson and then contained a string of random anecdotes, as well as historical inaccuracies.

Some advice was actively harmful, Wordsworth observed. For example, one chapter discussing emotional dysregulation warned that friends and family did not “forgive the emotional damage you inflict. The pain and hurt caused by impulsive anger leave lasting scars.”

When Wordsworth researched the author he spotted a headshot that looked AI-generated, plus a lack of qualifications. He searched several other titles in the Amazon marketplace and was shocked to encounter warnings that his condition was “catastrophic” and that he was “four times more likely to die significantly earlier”.

He felt immediately “upset”, as did his father, who is highly educated. “If he can be taken in by this type of book, anyone could be – and so well-meaning and desperate people have their heads filled with dangerous nonsense by profiteering scam artists while Amazon takes its cut,” Wordsworth said.

An Amazon spokesperson said: “We have content guidelines governing which books can be listed for sale and we have proactive and reactive methods that help us detect content that violates our guidelines, whether AI-generated or not. We invest significant time and resources to ensure our guidelines are followed and remove books that do not adhere to those guidelines.

“We continue to enhance our protections against non-compliant content and our process and guidelines will keep evolving as we see changes in publishing.”
