Do I belong in tech anymore?

Original link: https://ky.fyi/posts/ai-burnout

## Burnout and the State of the Tech Industry

After years of fulfilling but increasingly draining design and engineering work, I recently quit my job. The job itself, maintaining a design system and bridging the design/engineering gap, was successful, earning positive team feedback and producing tangible improvements. Yet growing disillusionment and doubts about the value of the work led to this decision.

The main factor behind the burnout was the pervasive, often unchecked integration of AI into the workplace. Watching unreviewed AI-generated code get merged, seeing unverified chatbot replies go unquestioned, and sensing a general decline in critical thinking left me alienated and questioning the industry's future.

The disillusionment goes beyond AI. The tech industry's shift in values, from progressive ideals of fairness and social responsibility toward prioritizing profit and political alignment, has fueled a "loss of an ideal." I am mourning an industry that no longer reflects the principles I once believed in.

For now, I am prioritizing recovery, focusing on personal well-being, and rediscovering joy in pursuits outside of tech. While uncertain about my future in the field, I am reaffirming core beliefs: quality takes effort, meaningful work takes time, and human connection matters.

## Industry Anxiety and the Job-Market Squeeze

A recent Hacker News thread titled "Do I belong in tech anymore?" struck a nerve across the industry. Many commenters echoed the original poster's disillusionment, describing the current state as "bleak" and lamenting a loss of shared values.

A major concern was the current job market. Several users said that even with extensive experience they struggle to find work, and they worry about financial stability. Suggestions ranged from simply beginning the job search, to considering career paths outside of tech, to relocating to more affordable areas.

The rise of AI was another significant source of stress, with fears about its impact on both job security and the quality of work. Some worried that a "get it while you can" mentality is driving people to prioritize wealth accumulation over meaningful work, especially in high-cost areas like the Bay Area. Others pointed to problems with AI-generated work products, such as poorly written task descriptions. Overall, the discussion highlighted a growing sense of uncertainty and a search for meaning in a rapidly changing tech landscape.
## Original article

Two weeks ago, I quit my job.

It wasn’t a bad job, not by most metrics. It ticked the boxes a job is supposed to tick: good pay. Health insurance. Remote work. Time off. Nice coworkers.

I worked as our org’s only design engineer and maintainer of our design system. My job was to build components, to polish the final product that went out into the world, and to bridge gaps between design and engineering. During my time, I doubled surface coverage of our components, chipped away at bugs, and fixed accessibility issues. I published documentation. I administered twice-yearly surveys which indicated high satisfaction from the team—up significantly compared to when I began. I was doing good work.

And yet, work was rendering me increasingly miserable. I questioned myself. Why am I here? Does any of this work actually matter? And if I stop caring about the quality of my work… will anyone notice? (An uncomfortable thought.)

I knew I was tired, but I wasn’t sure if I wanted to quit. I took a week off to consider it, and told myself: if you still want to leave at the end of this week, hand in your resignation.

The following Monday, I handed in my resignation. I felt immediate relief. I had nothing else lined up, but I knew I needed to go. I’m unsure when (or if) I’ll return to full-time tech work.

What happened?

A hand holds an iPhone with a shattered glass exterior.

Not long after quitting, I dropped my phone and it shattered. Photo by the author.

The psychic toll of AI

Consider the following scenarios:

  • You join a meeting with a coworker. Your coworker has enabled an AI tool to automatically take notes and summarize the meeting. They do not ask for consent to turn it on. The tool mischaracterizes what you discuss.
  • A team lead adds an AI chatbot to a Slack channel. Anyone can tag the bot to answer questions about the company’s products. Coworkers tag the chatbot many times a day. You never see someone check that the bot’s responses are correct.
  • An engineer adds 12,000 lines of code affecting your app’s authentication. They ask that it be reviewed and merged same-day. Another engineer enlists a “swarm” of AI agents to review the code. The code merges with no one having read the full set of changes.
  • A designer is tasked with exploring a new feature. They prompt an AI tool for an interactive prototype. Design crit is spent analyzing visual details in the generated prototype, with minimal discussion of core ideas, goals, or tradeoffs.
  • One of your pull requests has been open for a few days. You ask other engineers to leave a code review. Minutes later, an engineer pastes a review that was generated by an AI tool. There are no additional thoughts of their own.
  • You point an engineer to the relevant section of a library’s docs in order to request a feature. They tell you that the feature request is not possible, and send a screenshot of their chat with an AI tool as proof.
  • Documents and code are being generated faster than team members can review. You get the feeling that most people have stopped reading altogether.
  • Organization leadership has mandated that each person adopt new AI tools to “uplevel” themselves and their team.

I encountered each of these scenarios over the past few years, and each one left me wondering: do I raise an issue about AI here? Do I ask my coworker to disable their note-taking tool, or do I allow them to record me? (Where does the data go? Who is reading it? Do we retain knowledge in the same way without manual note-taking?) Do I voice concerns over unread code entering the codebase, and the consequences of that pattern for institutional knowledge-building? Do I ask others on the design team to delay prototyping until later in the design process? Is it already too late to ask? Has the team already shipped the code, already designed the feature, already moved onto the next task? If someone requests my review on a pull request that was clearly vibe coded, do I review the code and write comments as usual, or send it back to them for self-review? Would initiating these discussions result in interpersonal stress? Should I just let things slide? Would I become known as a “difficult” coworker for pushing back on AI use? Does any of it really matter? Does anyone really care?

All of these questions consumed energy. Whether I decided to confront them or not was moot: they left me tired and alienated either way. AI had hooked its tendrils into every corner of my work life. Even if I, personally, abstained from most AI usage, I was steeped in an environment which made it impossible to avoid. Pushing back felt futile.

The explosion of AI has played a significant role in my own burnout. Worse, it feels inescapable. Few tech organizations are taking a principled stance against AI use.

But AI use is only one part of broader social trends within tech that leave me questioning whether I should remain here.

The loss of an ideal

When I started full-time design and dev work in the 2010s, tech was generally understood to be a progressive place. This was peak “fun tech job” era, with magazines publishing glossy covers about life at Google. Apple had a gay CEO!

The web was still in flux; as a designer, the prospect of shaping sites into more usable forms excited me. Usability and user-centered design were hot topics. Budding federal organizations like 18F and the United States Digital Service were embarking on meaningful technology-enabled civic work.

After Trump’s first election, people recoiled with shock and disbelief. How could this happen? Many organizations distanced themselves from the administration and reiterated their commitment to equality. Then, the COVID-19 pandemic hit, alongside a surge of protests for racial justice. There was a glimpse of unity. Biden was elected and swiftly proclaimed a return to “normal”.

“Normal” landed us where we are now: the second Trump administration, more flagrantly corrupt and cruel than the first. Protests surge (larger than ever!) amidst a quieter type of elite resignation. The words “equity” and “inclusion” are no more.

Tech organizations have now given up on pushing back against an unethical and violent administration, deciding that it is in their best business interest to flatter the president’s ego with gold trophies and pandering praise. Elon Musk and the “Department of Government Efficiency” took a sledgehammer to 18F and replaced it with National Design Studio, a propaganda shop whose main talent is building expensive and inaccessible landing pages.

Leaders at Google have abandoned former climate pledges as they work to build new data centers powered by natural gas turbines which emit more carbon than the entire city of San Francisco. Other tech CEOs smile for photos alongside war criminals.

A tweet from Guillermo Rauch, who posts: "Enjoyed my discussion with PM Netanyahu on how AI education and literacy will keep our free societies ahead. We spoke about AI empowering everyone to build software and the importance of ensuring it serves quality and progress. Optimistic for peace, safety, and greatness for Israel and its neighbors." Attached is a photo of Rauch posing with Benjamin Netanyahu. Posted September 29, 2025.

Guillermo Rauch, CEO of Vercel, poses with Benjamin Netanyahu, prime minister of Israel. The International Criminal Court has issued an arrest warrant for Netanyahu for “war crimes of starvation as a method of warfare and of intentionally directing an attack against the civilian population.” Original tweet.

I keep asking myself:

What happened to the principles that were professed a decade ago? To address climate change? To reduce racial, gender, and economic inequality? To “don’t be evil”?

Were these principles abandoned, or were they merely born of convenience?

Has tech always been like this? Was I just blind to it before?

When I say that I am burnt out I do not mean simply that I am tired. I’m referring to the “emotional experience of political defeat”:

Burnout in Freudenberger’s articles from this period is not just defined in terms of physical tiredness as a result of doing too many things; rather, it emerges from emotional investment in a cause and from the disappointments that arise when flaws in a political project become apparent. Freudenberger’s concept not only describes physical exhaustion but also acknowledges the need to deal with anger caused by grief brought about by the “loss of an ideal.” Burnout in the context of social justice projects thus often involves a process of mourning, according to Freudenberger. Returning to his earlier writings on burnout makes it clear that when understood as a malaise arising from politically committed activities, burnout cannot be equated with tiredness or stress.

Hannah Proctor, Burnout, p. 92

I love designing and building things for the web, but I’m mourning an industry that does not share the ideals I once thought it did.


I understand why people use AI. Life can be difficult and confusing. Prompting the machine is so alluring—it answers with such certainty! How could it be wrong? (And even if it is a little wrong, well… hasn’t it saved time? Does it need to be perfect?) The temptation is real.

I don’t blame people for opting to use tools that promise quick, convenient solutions to problems. We all operate under capitalism. Many of us have bullshit jobs where the goal is not, in fact, to make something good, or even to learn, but to simply make money to pay rent and medical expenses. To hopefully find a little joy on the side. The whole system is broken; AI alone didn’t break it, but it is widening the cracks.

I guess what I’m trying to say is I wish none of us had to live like this. I would like to imagine a future that does not look like this.

Ironically, what I’ve gained from AI is a deeper appreciation for human communication, in all its messy imperfection. The point of a code review is not simply for good code to make it into a codebase, but to build institutional knowledge as people debate and iterate and compromise, slow as it may be. Friction is good.

I’ve posted it before, but it feels evergreen

The two hardest problems in Computer Science are

  1. Human communication
  2. Getting people in tech to believe that human communication is important

— Hazel Weakly (@hazelweakly.me) March 31, 2026 at 2:46 AM

Where do I go from here?

No matter how rapidly technology changes, I am coalescing around some core beliefs:

  1. Things that are worth doing are worth doing well.
  2. Things that are done well require time and effort.
  3. You make meaning through the doing.
  4. Ideas are common; effort is not.
  5. There are no shortcuts.

I am, as it stands, without a job. Recovering from burnout will take time. Thankfully, I have savings that afford me the privilege to take that time. I’m distancing myself from social media and news, at least for a little while. At some point, I will need to decide if I want to remain in this industry, and if so, where to go next.

In the meantime, I’m going to the gym. (Crossfit, weirdly.) I’m learning more about how synthesizers work and I’m generating different sounds. I’m looking at birds. I’m looking at my cat. I’m continuing to build tools to help trans people with legal name changes. I’m spending time with friends.

Eventually I will find new work. Who knows where.
