What the hell is going on right now?

Original link: https://catskull.net/what-the-hell-is-going-on-right-now.html

## The AI-Driven Engineering Crisis

A growing frustration is brewing in software engineering. Experienced engineers are burning out from reviewing and debugging "vibe code" generated by AI tools like Claude: code that reflects no real understanding and often doesn't work. They genuinely want to mentor junior developers, but their feedback is increasingly ignored, and LLM-generated code is simply resubmitted as the next revision.

Reliance on AI has not delivered efficiency gains; instead it wastes time, with engineers spending weeks reviewing and trying to salvage AI-generated "deliverables". The author questions the economic viability of the whole system, noting that even the AI companies themselves are not yet profitable and survive on venture capital, which makes their long-term prospects doubtful.

Ultimately, the concern is not just wasted resources but the erosion of genuine skill development and the value of mentorship. The author advocates stepping away from AI tools for a while, arguing that solving problems and learning independently is more valuable than depending on a flawed and unsustainable technology.


Original article

What the hell is going on right now?

Engineers are burning out. Orgs expect their senior engineering staff to be able to review and contribute to “vibe-coded” features that don’t work. My personal observation is that the best engineers are highly enthusiastic about helping newer team members contribute and learn.

Instead of their comments being taken to heart, reflected on, and used as learning opportunities, hapless young coders are instead using feedback as simply the next prompt in their “AI” masterpiece. I personally have witnessed and heard first-hand accounts where it was incredibly obvious a junior engineer was (ab)using LLM tools.

In a recent company town hall, I watched as a team of junior engineers demoed their latest work. I couldn't tell you what exactly it did, or even what it was supposed to do - it didn't seem like they themselves understood. However, at a large enough organization, it's not about what you do, it's about what people think you do. Championing their "success", a senior manager goaded them into bragging about their use of "AI" tools, to which they responded "This is four thousand lines of code written by Claude". Applause all around.

I was asked to add a small improvement to an existing feature. After reviewing the code, I noticed a junior engineer was the most recent person to work on that feature. As I always do, I reached out to let them know what I'd be doing and to see if they had any insight that would be useful to me. Armed with the GitHub commit URL, I asked for context around their recent change. I can't know for sure, but I'd be willing to put money down that my exact question and the commit were fed directly into an LLM, whose output was then copied and pasted back to me. I'm not sure why, but I felt violated. It felt wrong.

A friend recently confided in me that he's been on a team of at least five others who have spent the past month reviewing a heavily vibe-coded PR. A month. Reviewing slop produced by an LLM. What are the cost savings of paying ChatGPT $20 a month and then having a literal team of engineers trying to review and merge the code?

Another friend commiserated with me about the difficulty of trying to help an engineer contribute at work. "I review the code, ask for changes, and then they immediately hit me with another round of AI slop."

Here’s the thing - we want to help. We want to build good things. Things that work well, that make people’s lives easier. We want to teach people how to do software engineering! Any engineer is standing entirely on the shoulders of their mentors and managers who’ve invested time and energy into them and their careers. But what good is that investment if it’s simply copy-pasted into the latest “model” that “is literally half a step from artificial general intelligence”? Should we instead focus our time and energy into training the models and eliminate the juniors altogether?

What a sad, dark world that would be.

Here’s an experiment for you: stop using “AI”. Try it for a day. For a week. For a month.

Recently, I completely reset my computer. I like to do it from time to time. As part of that process I prune out any software that I no longer use. I've been paying for Claude Pro for about six months. But slowly, I've come to feel it's just a huge waste of time. Even if I have to do a few independent internet searches and read through a few dozen Stack Overflow and documentation pages, my own conclusion is so much more reliable and accurate than anything an LLM could ever spit out.

So what good are these tools? Do they have any value whatsoever?

Objectively, it would seem the answer is no. But at least they make a lot of money, right?

Is anyone making money on AI right now? I see a pipeline that looks like this:

  • “AI” is applied to some specific, existing area, and a company spins up around it because it’s so much more “efficient”
  • AI company gets funding from venture capitalists
  • AI company gives that funding to AI service providers such as OpenAI in the form of payments for usage credits
  • AI company evaporates

This isn’t necessarily all that different from the existing VC pipeline, but the difference is that not even OpenAI is making money right now. I believe this is because the technology is inherently flawed and cannot scale to meet the demand. It simply consumes too much electricity to ever be economically viable, not to mention the serious environmental concerns.

We can say our prayers that Moore’s Law will come back from the dead and save us. We can say our prayers that the heat death of the universe will be sufficiently prolonged in order for every human to become a billionaire. We can also take an honestly not even hard look at reality and realize this is a scam.

The emperor is wearing no clothes.
