Chess engines didn't replace Magnus Carlsen, and AI won't replace you

Original link: https://coding-with-ai.dev/posts/use-ai-like-magnus-carlsen/

## Coding Assistants as Training Partners: A Chess Analogy

The author draws an analogy between Magnus Carlsen's use of chess engines and their own experience with coding assistants. Just as Carlsen doesn't let an engine *play* for him but *uses* it for post-game analysis to learn and improve, the author avoids using AI-generated code directly. Instead, they treat the code as a learning opportunity: a chance to dissect suggestions, spot mistakes, and discover new approaches.

This mirrors Carlsen's evolution after studying AlphaZero, when he adopted new strategies revealed in the engine's unconventional games. The key is understanding *why* the AI proposes certain solutions rather than accepting them blindly.

The author stresses that code review becomes essential: a "post-game analysis" that ensures correctness, consistency, and alignment with project requirements. Just as chess engines didn't diminish the sport, coding assistants aren't replacing developers; they're augmenting their skills and enabling more ambitious projects. Ultimately, responsibility for learning and quality remains with the human developer.


October 22, 2025

I've been thinking about how Magnus Carlsen talks about using chess engines to train. He doesn't let them play for him during the game itself (that would be cheating), but after the game? That's when the real learning happens: he reviews his games with an engine, finds mistakes, discovers better moves he didn't see.

Lately I've realized that's exactly the relationship I've developed with coding assistants. I don't let them commit straight to main (that would be reckless). But after they generate code? That's when the learning happens. Code review becomes like chess post-game analysis where I'm dissecting what the LLM produced, finding the subtle mistakes, learning new patterns I hadn't considered.

How Magnus uses chess engines (and what developers can learn)

After DeepMind's AlphaZero beat Stockfish, Carlsen studied those games deeply. In his own words:

I have become a very different player in terms of style than I was a bit earlier, and it has been a great ride.

According to his coach, the change came from wild ideas AlphaZero uncovered: sacrificing pieces for long-term advantage, pushing the rook pawn aggressively, using the king as an active fighter.

The key thing: Carlsen isn't blindly memorising engine moves. He's learning why those moves work by reviewing them. He still plays the game himself, but with a broader vision. The engine is a coach, not an autopilot.

That's the parallel I see with coding assistants. They can crank out solutions in seconds and show me approaches I didn't consider. But if I just accept suggestions without thought, I'm basically letting Stockfish play the game for me.

The sweet spot is using the LLM to augment my judgment. Let it show me options, then I decide what to do with that information.

Code review: the developer's post-game analysis

Code review becomes your post-game analysis. Magnus reviews his games with engines to learn from their superior analysis. You review LLM code to ensure it's actually correct. Both demand expertise: Magnus needs it to understand why the engine's moves work; you need it to distinguish code that looks good from code that actually makes sense.

When I have an LLM generate code, I don't merge it right away. I review it with the same healthy skepticism I'd apply to a human contributor. More skepticism, actually: human contributors rarely invent imaginary API endpoints. (Though I did it myself before the LLM era more times than I'm ready to admit, confidently coding against an endpoint I was sure existed.)

LLM code needs a human eye for things machines aren't good at. Does this actually fit our requirements? Is it idiomatic? Did it consider the edge cases?

Code review is the gate to the codebase where nothing ships until a human is willing to take responsibility. You catch the mistakes, ensure consistency, sanity-check the diff. Same things you'd do for a junior developer's big PR.

[Chart: how the work shifted. Time estimates based on personal experience, as of October 2025.]

Using review as opportunity for learning

Magnus once said he doesn't fear computers, because he uses them to train and become even stronger against human opponents. I'm not worried about coding assistants replacing me. I'm using them to level myself up.

The option to copy-paste without understanding has always existed (StackOverflow, now LLMs). The tool doesn't decide whether I learn or not. I do. Will I lose problem-solving skills by blindly accepting every suggestion? Absolutely. But that's on me.

From "looks good" to "makes sense"

After months of working with AI coding tools, I've noticed a shift in how I approach problems. I'm less afraid to attempt ambitious things. Not because the LLM will magically do it for me, but because I know I have a sparring partner that will catch my blunders and occasionally point out a good shortcut.

Right now, I can talk to a computer and it talks back. And yeah, it drives me a little nuts at times. But like any good sparring partner, it's making me better.

Chess engines changed how the game is played and studied. People worried it would "solve" chess and kill the game. Instead, chess is more popular than ever.

The same pattern repeats with coding assistants. People worry that autonomous agents will take our jobs. But what we're actually seeing is that they augment us and change the way we approach software development, though mainly for those whose stack is widely covered by LLM training data. True AGI is still far away.
