Two weeks ago, I quit my job.
It wasn’t a bad job, not by most metrics. It ticked the boxes a job is supposed to tick: good pay. Health insurance. Remote work. Time off. Nice coworkers.
I worked as our org’s only design engineer and maintainer of our design system. My job was to build components, to polish the final product that went out into the world, and to bridge gaps between design and engineering. During my time, I doubled surface coverage of our components, chipped away at bugs, and fixed accessibility issues. I published documentation. I administered twice-yearly surveys which indicated high satisfaction from the team—up significantly compared to when I began. I was doing good work.
And yet, work was rendering me increasingly miserable. I questioned myself. Why am I here? Does any of this work actually matter? And if I stop caring about the quality of my work… will anyone notice? (An uncomfortable thought.)
I knew I was tired, but I wasn’t sure if I wanted to quit. I took a week off to consider it, and told myself: if you still want to leave at the end of this week, hand in your resignation.
The following Monday, I handed in my resignation. I felt immediate relief. I had nothing else lined up, but I knew I needed to go. I’m unsure when (or if) I’ll return to full-time tech work.
What happened?

Not long after quitting, I dropped my phone and it shattered. Photo by the author.
The psychic toll of AI
Consider the following scenarios:
- You join a meeting with a coworker. Your coworker has enabled an AI tool to automatically take notes and summarize the meeting. They do not ask for consent to turn it on. The tool mischaracterizes what you discuss.
- A team lead adds an AI chatbot to a Slack channel. Anyone can tag the bot to answer questions about the company’s products. Coworkers tag the chatbot many times a day. You never see someone check that the bot’s responses are correct.
- An engineer adds 12,000 lines of code affecting your app’s authentication. They ask that it be reviewed and merged same-day. Another engineer enlists a “swarm” of AI agents to review the code. The code merges with no one having read the full set of changes.
- A designer is tasked with exploring a new feature. They prompt an AI tool for an interactive prototype. Design crit is spent analyzing visual details in the generated prototype, with minimal discussion of core ideas, goals, or tradeoffs.
- One of your pull requests has been open for a few days. You ask other engineers to leave a code review. Minutes later, an engineer pastes a review that was generated by an AI tool. There are no additional thoughts of their own.
- You point an engineer to the relevant section of a library’s docs in order to request a feature. They tell you that the feature request is not possible, and send a screenshot of their chat with an AI tool as proof.
- Documents and code are being generated faster than team members can review. You get the feeling that most people have stopped reading altogether.
- Organization leadership has mandated that each person adopt new AI tools to “uplevel” themselves and their team.
I encountered each of these scenarios over the past few years, and each one left me wondering: do I raise an issue about AI here? Do I ask my coworker to disable their note-taking tool, or do I allow them to record me? (Where does the data go? Who is reading it? Do we retain knowledge in the same way without manual note-taking?) Do I voice concerns over unread code entering the codebase, and the consequences of that pattern for institutional knowledge-building? Do I ask others on the design team to delay prototyping until later in the design process? Is it already too late to ask? Has the team already shipped the code, already designed the feature, already moved onto the next task? If someone requests my review on a pull request that was clearly vibe coded, do I review the code and write comments as usual, or send it back to them for self-review? Would initiating these discussions result in interpersonal stress? Should I just let things slide? Would I become known as a “difficult” coworker for pushing back on AI use? Does any of it really matter? Does anyone really care?
All of these questions consumed energy. Whether I decided to confront them or not was moot: they left me tired and alienated either way. AI had hooked its tendrils into every corner of my work life. Even if I, personally, abstained from most AI usage, I was steeped in an environment which made it impossible to avoid. Pushing back felt futile.
The explosion of AI has played a significant role in my own burnout. Worse, it feels inescapable. Few tech organizations are taking a principled stance against AI use.
But AI use is only one part of broader social trends within tech that leave me questioning whether I should remain here.
The loss of an ideal
When I started full-time design and dev work in the 2010s, tech was generally understood to be a progressive place. This was peak “fun tech job” era, with magazines publishing glossy covers about life at Google. Apple had a gay CEO!
The web was still in flux; as a designer, the prospect of shaping sites into more usable forms excited me. Usability and user-centered design were hot topics. Budding federal organizations like 18F and the United States Digital Service were embarking on meaningful technology-enabled civic work.
After Trump’s first election, people recoiled with shock and disbelief. How could this happen? Many organizations distanced themselves from the administration and reiterated their commitment to equality. Then, the COVID-19 pandemic hit, alongside a surge of protests for racial justice. There was a glimpse of unity. Biden was elected and swiftly proclaimed a return to “normal”.
“Normal” landed us where we are now: the second Trump administration, more flagrantly corrupt and cruel than the first. Protests surge (larger than ever!) amidst a quieter type of elite resignation. The words “equity” and “inclusion” are no more.
Tech organizations have now given up on pushing back against an unethical and violent administration, deciding that it is in their best business interest to flatter the president’s ego with gold trophies and pandering praise. Elon Musk and the “Department of Government Efficiency” took a sledgehammer to 18F and replaced it with the National Design Studio, a propaganda shop whose main talent is building expensive and inaccessible landing pages.
Leaders at Google have abandoned former climate pledges as they work to build new data centers powered by natural gas turbines which emit more carbon than the entire city of San Francisco. Other tech CEOs smile for photos alongside war criminals.

Guillermo Rauch, CEO of Vercel, poses with Benjamin Netanyahu, prime minister of Israel. The International Criminal Court has issued an arrest warrant for Netanyahu for “war crimes of starvation as a method of warfare and of intentionally directing an attack against the civilian population.” Original tweet.
I keep asking myself:
What happened to the principles that were professed a decade ago? To address climate change? To reduce racial, gender, and economic inequality? To “don’t be evil”?
Were these principles abandoned, or were they merely born of convenience?
Has tech always been like this? Was I just blind to it before?
When I say that I am burnt out I do not mean simply that I am tired. I’m referring to the “emotional experience of political defeat”:
Burnout in Freudenberger’s articles from this period is not just defined in terms of physical tiredness as a result of doing too many things; rather, it emerges from emotional investment in a cause and from the disappointments that arise when flaws in a political project become apparent. Freudenberger’s concept not only describes physical exhaustion but also acknowledges the need to deal with anger caused by grief brought about by the “loss of an ideal.” Burnout in the context of social justice projects thus often involves a process of mourning, according to Freudenberger. Returning to his earlier writings on burnout makes it clear that when understood as a malaise arising from politically committed activities, burnout cannot be equated with tiredness or stress.
Hannah Proctor, Burnout, p. 92
I love designing and building things for the web, but I’m mourning an industry that does not share the ideals I once thought it did.
I understand why people use AI. Life can be difficult and confusing. Prompting the machine is so alluring—it answers with such certainty! How could it be wrong? (And even if it is a little wrong, well… hasn’t it saved time? Does it need to be perfect?) The temptation is real.
I don’t blame people for opting to use tools that promise quick, convenient solutions to problems. We all operate under capitalism. Many of us have bullshit jobs where the goal is not, in fact, to make something good, or even to learn, but to simply make money to pay rent and medical expenses. To hopefully find a little joy on the side. The whole system is broken; AI alone didn’t break it, but it is widening the cracks.
I guess what I’m trying to say is I wish none of us had to live like this. I would like to imagine a future that does not look like this.
Ironically, what I’ve gained from AI is a deeper appreciation for human communication, in all its messy imperfection. The point of a code review is not simply for good code to make it into a codebase, but to build institutional knowledge as people debate and iterate and compromise, slow as it may be. Friction is good.
I’ve posted it before, but it feels evergreen:
The two hardest problems in Computer Science are
- Human communication
- Getting people in tech to believe that human communication is important
— Hazel Weakly (@hazelweakly.me) March 31, 2026 at 2:46 AM
Where do I go from here?
No matter how rapidly technology changes, I am coalescing around some core beliefs:
- Things that are worth doing are worth doing well.
- Things that are done well require time and effort.
- You make meaning through the doing.
- Ideas are common; effort is not.
- There are no shortcuts.
I am, as it stands, without a job. Recovering from burnout will take time. Thankfully, I have savings that afford me the privilege to take that time. I’m distancing myself from social media and news, at least for a little while. At some point, I will need to decide if I want to remain in this industry, and if so, where to go next.
In the meantime, I’m going to the gym. (CrossFit, weirdly.) I’m learning more about how synthesizers work and I’m generating different sounds. I’m looking at birds. I’m looking at my cat. I’m continuing to build tools to help trans people with legal name changes. I’m spending time with friends.
Eventually I will find new work. Who knows where.