Engineers and AI: ramblings of a small startup founder

原始链接: https://labadal.com/blogs/ruminations/engineers-and-ai.html

As a startup founder and software engineer, I am navigating the complicated terrain of AI in software development. I was skeptical at first, seeing AI tools as eager but unreliable interns whose generated code was often nonsensical and needed constant correction. Now I am starting to see their potential, while staying wary of their shortcomings. My biggest worry is over-reliance on AI, especially among junior engineers, which erodes critical thinking and problem-solving skills. AI can handle simple tasks, but it often lacks the contextual understanding and experience that complex challenges demand. I believe real growth comes from wrestling with problems and learning from mistakes, and AI shortcuts can get in the way of that. For me, the challenge is balancing productivity gains against cultivating genuine learning and independent thinking in my team. How do I identify candidates who will use AI as a tool for growth rather than a crutch? The rapid pace of AI development keeps me thinking about the future of software engineering and the value of human developers.

Labadal, a startup founder, published a reflection on AI's impact on software engineering on their blog (labadal.com). The piece was received as thoughtful and balanced, and it sparked discussion on Hacker News. The main point of contention is how to balance using AI to accelerate development against maintaining essential engineering skills, such as reviewing code and writing well-structured code. One commenter, dmane11, noted that past technology shifts (LSP, search engines, Stack Overflow) produced similar feelings, suggesting a recurring pattern of disruption and adaptation. Labadal, a relatively new professional, questioned whether any previous technological advance ever felt like it would fully replace engineers the way AI is discussed today. Developers broadly feel adrift, and graduates just entering the field most of all.

Original text

AI feels like a nightmare for a lot of us software engineers. Everyone wants it to replace us. I have always been a naysayer when it comes to AI in software engineering. Not that it can't eventually be something that replaces me, just that it slows me down right now. I understand the thing I'm building and why I'm building it. I can call out bad requirements (unlike our favorite yes-man, ChatGPT). I know things about the way the world works, and why the thing I'm building can't just be solved with the statistical mean solution. Recently, though, I've started finding more and more of a place for it in my life.

For context, I am a recent grad who started a very ambitious startup while still in University. I’ve had a couple of internships at medium-sized tech companies before I started this, but most of my experience has come from being a woefully unprepared founder of a small company. Working with the resources we have, we’re always trying to get more out of less and squeeze productivity out of our time. Naturally AI is the current front-runner of tools to use.

When my partner and I first started this company, our first hire was a designer. He was an acquaintance of a friend and he was the only one we really interviewed. Years later, he still works with us. I have since learned that this is incredibly anomalous. Hiring ever since has been a pain. Even before AI coding tools really took off.

Soon after we started working on our company we were fortunate to be connected with an angel investor who chose to support us. This gave us a small amount of capital to start hiring software engineers with. Whether hiring at that point was the right call is a question for another time. In the years since, we’ve had a few more people and firms invest in us. I’ve had the opportunity to see people of different skill levels, enthusiasm and compatibility with our workplace come and go. Now, I see how AI is affecting my opinions and expectations of my coworkers and new hires.

There are problems that some people (and AI) just can’t solve right now.

This isn’t because of a lack of intelligence, but because they just haven’t had any experience with something similar enough yet. Not everyone can dive into a large problem with little context and get started. Asking relevant questions to determine requirements is a skill that’s built over time.

AI tools can’t do that very well. They feel to me like an intern who is very eager to do something. They’re very anxious when they don’t know the answer and are more than willing to submit a terrible PR just to show that they didn’t spend the last week sitting on their hands. The code doesn’t pass CI, it doesn’t run, and why on earth did you just commit a copy of RandomUnrelatedComponent.tsx.bak?

I can’t tell you how many times I’ve had an LLM spit out completely nonsensical things, use magic numbers that mean nothing in any context, and repeatedly fail to fix the same issue, no matter how many times I say please.

I have learned to value people who say "I don't know how to do this" and ask me to be their rubber-duck-that-occasionally-quacks (after they've tried something on their own, of course). This is one of the reasons that I really believe in hiring junior engineers for the long haul. Sadly, I feel like these people are starting to go extinct because they fall back on AI's answer when all else fails, resulting in the same slop that I could have gotten from just a chatbot.

Some people (and AI) never learn

I can’t tell you how many times I’ve told a chatbot the same thing over and over. Yes, I know I could be better at setting up context, but it still doesn’t work very often. It usually gets to the point where it’s easier to just get things done myself than to keep prompting hopelessly.

This experience is reminiscent of when we were in the middle of a violent storm of deliverables, with investors breathing down our necks, asking for results.

If you’ve been on a late project on a team whose manager hasn’t read “The Mythical Man-Month”, you know the frustration of having to train a new engineer and walk them through the issues with their PR while there's a fire under your ass. The change is so small that you could just do it yourself in a few minutes and have your work bestie review it. But you still suffer through the process and teach them what they need to know, in hopes that they can help you during the next mess the team finds itself in. People learn and then they get better… Right?

I firmly believe that you learn more, as an engineer, when you mess up than when you get something right the first time. Struggle is part of improvement. It’s how you refine your thought process and become a better problem-solver. If you work with a chatbot, and you happen to get something that works with your first prompt, you learn next to nothing. Is this the right solution? What else could you have tried? How many related files did you visit and internalize?

If you want to grow skills in any domain, the age-old wisdom was to do something hard. At my startup I know that we feel pressure to deliver things quickly. I don’t want that to be at the cost of doing it right, and the ability to do the next thing better.

AI can be toxic to you, your engineers, and your company's long-term success

AI is a narcissist who will stop at nothing to prove to you that you can’t live without it. It will draw you in by solving easy things. It slowly gets you to trust it. Over time, you start to give it larger and larger chunks of changes to make because it’s easier. You slowly forget how to type code yourself. “That’s okay though, I’m just using it to speed up my lookups, right?” “This is just like how I always had to look up simple syntax, right?” Yep, until it lets you down. Makes you incapable of surviving without it. I speak from experience. Cursor will look me straight in the eye and tell me that it’s fixed the problem, only to have deleted my test case. Now I don’t know how to solve the problem myself and I have to relearn so much.

I consider myself part of the last generation that has seen the light of the trees: one that worked in codebases of all sizes without AI. This, and my inherent skepticism, allow me to stay on guard for AI's mistakes. Some of my other teammates, not so much.

People seem to be so willing to offload their critical thinking to AI. I occasionally need to ask people to explain their changes to me so I can suggest ways to make them more readable. More often than not, it turns out we understand the change to about the same degree. Their prefrontal cortex was temporarily outsourced to an offshore datacenter chugging out 5 megatons of fumes a nanosecond.

AI cannot replace engineers right now, unless we let it. If we reduce our job to just asking our favorite LLM for answers, of course we’re going to be replaced by it. We need to be willing to solve problems that AI can’t. If you can’t do that yet, solve the simple problems and learn to grow so you’re not stuck emulating or relying on AI. At the very least, try and figure out why the chatbot suggested the answer that it did instead of copying the snippet into your branch.

What does this all mean?

Not much. I don’t know where the cards will fall with this AI-replacing-people stuff. I know that replacing a human isn’t the answer, but neither is completely ignoring AI. As a software engineer, I know that I need to find a balance that works to speed up my workflow without taking away my ability to learn new things. I need to be willing to sacrifice “velocity” for my own wellbeing. As a person running a company, I see the appeal of AI. I have to ask myself if hiring someone is going to help anything if they’re not going to be bringing any new perspectives and ideas to the table, and just using the same AI that I can also subscribe to for $20. How can I sift through the noise to find the people willing and able to do the work and use these tools to get better, not more dependent? How can I facilitate this better?

I wonder how I’ll feel in a few months or a few years. Anthropic just released Claude Opus 4 and Sonnet 4 while I was writing this. Will the rapid pace of progress create such a moat between the value AI provides and what a new hire could that we, as a profession, become obsolete? Time will tell.

First posted 2025-05-22
