Re: My AI skeptic friends are all nuts

原始链接: https://skarlso.github.io/2025/06/07/re-my-ai-skeptic-friends-are-all-nuts/

The author criticizes a pro-AI article for glossing over the serious problems that over-reliance on large language models (LLMs) creates, especially for novice developers. AI can help, but outsourcing critical thinking leads to skill atrophy. Beginners have to struggle and "figure things out" to truly learn, not just ask an AI. The author worries that if no one trains entry-level developers, the industry's future is bleak. He is also concerned that the "AI slop" flooding code, research, and web content will further pollute LLM training data. Programming is both an art and a problem-solving process that demands creativity and lateral thinking, abilities AI cannot replicate. "Hallucinations" in AI-generated code are another major problem, especially for learners who lack the experience to spot the errors. The author calls for guardrails, for research, and for banning LLMs from schools, given their harmful effect on young people's cognitive abilities. Finally, he voices concern about educators actively encouraging LLM use, which deepens dependence on AI.

This Hacker News thread centers on skepticism about AI, and large language models (LLMs) in particular. User NitpickLawyer argues that the debate dwells too much on feelings and hypothetical scenarios rather than AI's objective performance, drawing a comparison to past arguments over self-driving cars. Several commenters agree, noting that many critics have not kept up with how quickly AI capabilities have advanced. The thread also digs into AI's impact on education. One commenter relates how a teacher encouraged students to use LLMs to keep their grades up, which raises questions about the real value of traditional assignments such as essay writing if AI can complete them. The poster counters: if an LLM can do the assignment, why is that a problem at all? There is broad agreement that education needs a fundamental overhaul. Other related points include the worry that LLMs retrained on their own generated output may degrade over time, and that education will have to grapple with these limitations.

Original text

There was a post recently that was dissing AI Skeptics. While the post is funny at times, I feel like it’s absolutely and completely missing the point of the skepticism. Or at least I feel that it is glossing over some massive pain points of said skepticism.

Let’s go over some of the points and the things that stuck out to me as problematic.

Now, mind you, I don’t necessarily disagree here. An AI review helper on an open source project where maintainers are swamped with tickets would be a godsend (assuming its review is actually valuable and can be fine-tuned on the ever-changing codebase).

But we humans shouldn’t outsource our critical thinking completely. It will deteriorate so fast—you blink and suddenly, you have no idea how to do a simple lookup without firing up your LLM agent.

The post I'm responding to puts it like this: "I can feel my blood pressure rising thinking of all the bookkeeping and Googling and dependency drama of a new project. An LLM can be instructed to just figure all that shit out. Often, it will drop you precisely at that golden moment where shit almost works, and development means tweaking code and immediately seeing things work better. That dopamine hit is why I code."

My dopamine comes not from tweaking someone (something) else's code to make something barely work. My dopamine comes from the fact that I solved some intricate, non-trivial problem. Sure, write me unit tests—that's awesome. Or write that fifth table join + query code, go for it. Nevertheless, the part where it can be instructed to just figure all that shit out is extremely dangerous. I cannot stress this enough.

You might not like it, because you are an advanced user, but a young person just coming into the craft absolutely needs to figure all that shit out—otherwise all their experience will be something akin to "oh, I'll just ask my agent to do that for me." That's not going to cut it.

Try saying in an interview, "Oh, I have no idea, I'll just ask my LLM." The problem here is not you using a tool to further your goals. The problem is that most people have begun outsourcing their critical thinking and problem-solving abilities. And it shows. Even for me it shows. I tried to write some test code recently and I had completely forgotten how to write table tests, because I generate all of that now. And it's frightening. I haven't even been using LLMs for long. And you can't compare this level of generation with snippets, or StackOverflow, or copy-pasting. This is on a different, more sophisticated level.
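For context, a table test is the idiomatic Go pattern of listing test cases as plain data and running them all through a single loop. A minimal sketch (the `Add` function here is a made-up stand-in for illustration, not code from either post):

```go
package main

import "testing"

// Add is a made-up function under test, used only to illustrate the pattern.
func Add(a, b int) int { return a + b }

// TestAdd is a classic Go table test: the cases live in a slice of structs,
// and one loop runs each case as a named subtest via t.Run.
func TestAdd(t *testing.T) {
	tests := []struct {
		name string
		a, b int
		want int
	}{
		{"positives", 2, 3, 5},
		{"zeros", 0, 0, 0},
		{"mixed signs", -1, 1, 0},
	}
	for _, tc := range tests {
		t.Run(tc.name, func(t *testing.T) {
			if got := Add(tc.a, tc.b); got != tc.want {
				t.Errorf("Add(%d, %d) = %d, want %d", tc.a, tc.b, got, tc.want)
			}
		})
	}
}
```

It's trivial boilerplate once you've seen it, which is exactly the kind of thing that is easy to delegate to a generator—and then quietly forget how to write yourself.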

Now imagine what's going on in faculties and schools and research where your ability to think is now something like "I'll just ask the LLM." Of course there are always outliers—people who use it like an assistant. But those are the minority. The majority needs guardrails; otherwise there will be a harsh tipping point where suddenly people will no longer remember how to structure code or design a simple application.

The post also claims: "let's stop kidding ourselves about how good our human first cuts really are."

This and the rest of the article ignores a couple of points that are truly bad.

  • If you stop investing in the human first cuts, where do you think the industry will end up? Dead in a ditch.

Sure, this was always an issue. No one wants to invest in people who are just starting out. But at least until now, you had to. If you don't, that's it: if no one is training and hiring beginners, there will be no seniors. Give it a couple of generations and you will have people switching away from development, since a ~$20/month LLM can apparently do the same thing (which is absolute bullshit). And then suddenly, there are no more devs around. And maybe that's the future. Maybe robots will write all the code and there will be only a handful of people maintaining them until they, too, are gone. But that's sci-fi. Hopefully.

  • LLMs are training on LLM output, and agents can't help with that. There is already (which is insane, since the technology has been mainstream for only a couple of years) a significant increase in AI slop in research papers, online blogs, and so on. Just look up how usage of the word "delve" has spiked. And since LLMs are trained on online data, they are re-trained on their own AI slop, further damaging them. Soon, bad code will get even worse.

  • Programming is art too. Indeed, architecture is sometimes very much an art form. Technical solutions to problems sometimes require lateral thinking. And solutions to hard problems might even come from an activity completely unrelated to the problem. There are many examples where a solution to a problem came from thinking specifically not like an engineer. Pixar’s shaders, the first UIs, the invention of Hypertext, Smalltalk!

  • And yes, it is stealing. The amount of plagiarism and the amount of ART that LLMs have stolen is insane.

Then there's the section on "hallucinations".

To this the author suggests reading the code. Sure, you are an experienced developer who has read and written many lines of code. But that misses the most important problem: someone who is still learning will be unable to tell whether the LLM is hallucinating. Because it reasons like my drunk friend (with absolute conviction), people tend to accept what it says. They either still lack the ability to think critically and look for alternative solutions, or they are simply tired and burned out by everything else they need to care about, and they will say the dreaded line: it looks good enough.

So am I saying don’t use LLMs? No, I’m not. I’m saying it needs serious guardrails, documentation, oversight, usage data, research, and it should definitely be banned from schools. Because it’s doing serious damage to young people’s ability to think, figure out, read, understand, and research.

Further, the person says they are a serious developer. That might actually be a drawback here. They assume that people will think critically, will vet the code, and will understand when it's flawed. Well, they don't. Most of the time they just don't. And reading AI slop for the 100th time, because IT WILL NOT LEARN, is just tiring. Whereas if I read code from a junior dev and tell them for the 5th time that something is wrong, they might actually make the effort to understand, and next time maybe not make the same mistake.

An LLM will 100% make the same mistake. Over and over and over again. Even if you have an agent and do some serious training down to a marginally small loss, it will still stumble on conceptually the same thing. I'm sure it will get better with time, but it's not there yet. And accepting it as it is, is very dangerous.

And as a last point, I fear very much what’s going on in schools. My son is in school. And the teacher literally said they need to use an LLM otherwise they won’t score high enough because the other pupils are using LLMs and the teacher is using an LLM to grade/check it. This is ridiculous. And if you think that’s okay or you don’t see a serious problem with this, then that’s an even greater problem.

As always, Thanks for reading!
