"ChatGPT Said This" Is Lazy

Original link: https://terriblesoftware.org/2025/10/24/chatgpt-said-this-is-lazy/

The author is frustrated by a growing trend: using AI-generated text (e.g., from ChatGPT) in code reviews and discussions *in place of* thoughtful feedback. While acknowledging that AI is useful as a tool for exploration and learning, they argue it should not replace genuine engagement and personal analysis. Simply pasting an AI response creates extra work for everyone else, forcing them to decode generic advice and figure out whether it is even relevant. Valuable feedback is specific, grounded in context, and demonstrates an understanding of the codebase and the project's constraints, which AI cannot supply. The core message is accountability: reviewers should *own* their feedback, express concerns in their own words (even if AI inspired them), and show that they have considered the specific implications for the team and the project. AI can *help* you think, but it should not be used to *avoid* thinking.

## "ChatGPT Said This": A Growing Annoyance

A recent Hacker News discussion highlights a growing trend: people citing ChatGPT (or other LLMs) as an authority in online conversations. The core complaint is that instead of forming their own opinions or doing basic research, users simply relay what the AI told them, often without saying so. When that output is presented as original thought, it comes across as inconsiderate, lazy, and even deceptive.

Many commenters compared the habit to the old "let me Google that for you," but argued it is worse because it conceals the absence of personal engagement. While some see value in using LLMs as a starting point or for quick lookups, the problem is presenting an AI-generated response as an informed opinion.

There is debate over whether AI use should be disclosed: some see disclosure as a courtesy that flags potential inaccuracies, while others argue the focus should be on taking responsibility for one's claims regardless of their source. Ultimately, the discussion points to the need for critical thinking and a refusal to outsource one's own thought process, even as AI tools grow more powerful.

Original Article

You’ve just pushed a PR after hours of careful work. You’re feeling pretty good about it too. Then the review comes in.

“ChatGPT thinks that {wall of AI-generated text}”

No context or specifics. Just a copy-paste job from someone who couldn’t be bothered to form their own thoughts.

I’m seeing this everywhere now: PR reviews, design docs, Slack threads. But here’s the thing: I don’t care what AI said. I care what you think.

ChatGPT isn’t on the team. It won’t be in the post-mortem when things break. It won’t get paged at 2 AM. It doesn’t understand the specific constraints, tech debt, or your business context. It doesn’t have skin in the game. You do.

When you paste an AI response instead of writing your own feedback, you’re not being helpful. You’re being lazy. Worse, you’re creating more work for everyone else. Now I have to parse through generic AI advice, figure out if it even applies to our situation, extract anything useful, and then guess what parts you actually agree with. Did you even read what you pasted? Do you understand it? Do you think it’s right?

Good feedback looks like this: “This nested loop is O(n²) and will blow up when we hit production scale. Consider using a hash map here.” Not: “I asked ChatGPT about your code and here’s what it said” followed by three paragraphs about algorithmic complexity that may or may not apply.
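To make the contrast concrete, here is a minimal, hypothetical sketch (not from the original post) of the kind of change that specific feedback points at, assuming the nested loop in question is checking a list of IDs for duplicates:

```python
# Hypothetical illustration: the nested-loop version the review flags,
# and the hash-based version the feedback suggests.

def has_duplicate_ids_quadratic(user_ids):
    """O(n^2): compares every pair of elements."""
    for i in range(len(user_ids)):
        for j in range(i + 1, len(user_ids)):
            if user_ids[i] == user_ids[j]:
                return True
    return False


def has_duplicate_ids_linear(user_ids):
    """O(n): a hash-based set makes each membership check roughly O(1)."""
    seen = set()
    for uid in user_ids:
        if uid in seen:
            return True
        seen.add(uid)
    return False


if __name__ == "__main__":
    ids = [42, 7, 13, 7, 99]
    # Both versions agree on the answer; only the scaling behavior differs.
    assert has_duplicate_ids_quadratic(ids) == has_duplicate_ids_linear(ids) == True
```

The point of the good review comment is exactly this kind of specificity: it names the pattern, says why it will hurt at production scale, and proposes a concrete alternative the author can act on.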

Look, I’m not anti-AI. I use it all the time. It’s incredible for exploring ideas, getting unstuck, learning new concepts. But there’s a massive difference between using AI to help you think and using it to avoid thinking altogether.

When you review someone’s work, you owe them real engagement. Specific feedback based on your understanding of the code and the context. If AI helps you spot an issue or articulate a concern, great! But then write it in your own words (if you agree!). Explain why it matters for this specific case. Show that you actually understand what you’re suggesting.

You’re the one with context. You’re the one who understands the codebase, the team dynamics, and your technical constraints. You’re the one whose name is on the review. You’re the one accountable, so own it.
