Child's Play: Tech's new generation and the end of thinking

Original link: https://harpers.org/archive/2026/03/childs-play-sam-kriss-ai-startup-roy-lee/

San Francisco has drifted ever further from reality: its advertising no longer speaks to basic needs but assumes that everyone is a founder. The disconnect is captured by unsettling billboards (“SOC 2 is done before your AI girlfriend breaks up with you”) set against the city’s visible homelessness and mental-health crises. The pervasive message is not about *consuming* but about *creating*, a pressure that leaves the city’s residents feeling out of place. At the heart of the strangeness is a new class emerging in the AI era: people with “agency,” a relentless drive to *act* regardless of consensus or permission. Roy Lee, founder of the intensely controversial startup Cluely, which makes an AI assistant for everyday office tasks, personifies this spirit. Despite a glitchy and widely derided product, Cluely thrives on hype and venture capital. Lee represents a shift in which traditional meritocracy matters less than sheer willpower. The worry is a deepening bifurcation: an AI-powered elite on one side, a discarded “permanent underclass” on the other. And while fears of superintelligence have stalled, the real concern is not an AI takeover but human dependence on it, and the loss of the agency these tech leaders prize and actively cultivate, even through stunts as odd as sperm racing and viral hype. The future, it seems, belongs to the people who *make* things happen, even if those things turn out to be... meaningless.

The Harper’s Magazine article sparked a Hacker News discussion that highlighted anxieties about AI’s impact and Silicon Valley’s shifting landscape. Users felt alienated by the article’s portrait of San Francisco as a city consumed by startup culture and “arcane B2B services.” A central theme is a potential “bifurcation event”: a future in which a small AI-savvy elite thrives while, thanks to AI’s growing capabilities, much of the population becomes “useless.” One commenter noted that AI needs human instruction, yet a growing number of people seem unable to function without AI. The discussion voiced concern that traditional skills such as intelligence and creativity may be devalued as AI writes code and potentially surpasses human cognition, rendering human reason and thought obsolete. The overall sentiment was critical of Silicon Valley’s potential to deepen inequality and devalue the individual.

Original article

The first sign that something in San Francisco had gone very badly wrong was the signs. In New York, all the advertising on the streets and on the subway assumes that you, the person reading, are an ambiently depressed twenty-eight-year-old office worker whose main interests are listening to podcasts, ordering delivery, and voting for the Democrats. I thought I found that annoying, but in San Francisco they don’t bother advertising normal things at all. The city is temperate and brightly colored, with plenty of pleasant trees, but on every corner it speaks to you in an aggressively alien nonsense. Here the world automatically assumes that instead of wanting food or drinks or a new phone or car, what you want is some kind of arcane B2B service for your startup. You are not a passive consumer. You are making something.

This assumption is remarkably out of step with the people who actually inhabit the city’s public space. At a bus stop, I saw a poster that read: today, soc 2 is done before your ai girlfriend breaks up with you. its done in delve. Beneath it, a man squatted on the pavement, staring at nothing in particular, a glass pipe drooping from his fingers. I don’t know if he needed SOC 2 done any more than I did. A few blocks away, I saw a billboard that read: no one cares about your product. make them. unify: transform growth into a science. A man paced in front of the advertisement, chanting to himself. “This . . . is . . . necessary! This . . . is . . . necessary!” On each “necessary” he swung his arms up in exaltation. He was, I noticed, holding an alarmingly large baby-pink pocketknife. Passersby in sight of the billboard that read wearable tech shareable insights did not seem piqued by the prospect of having their metrics constantly analyzed. I couldn’t find anyone who wanted to prompt it. then push it. After spending slightly too long in the city, I found that the various forms of nonsense all started to bleed into one another. The motionless people drooling on the sidewalk, the Waymos whooshing around with no one inside. A kind of pervasive mindlessness. Had I seen a billboard or a madman preaching about “a CRM so smart, it updates itself”? Was it a person in rags muttering about how all his movements were being controlled by shadowy powers working out of a data center somewhere, or was it a car?

Somehow people manage to live here. But of all the strange and maddening messages posted around this city, there was one particular type of billboard that the people of San Francisco couldn’t bear. People shuddered at the sight of it, or groaned, or covered their eyes. The advertiser was the most utterly despised startup in the entire tech landscape. Weirdly, its ads were the only ones I saw that appeared to be written in anything like English:

hi my name is roy
i got kicked out of school for cheating.
buy my cheating tool
cluely.com

Cluely and its co-founder Chungin “Roy” Lee were intensely, and intentionally, controversial. They’re no longer in San Francisco, having been essentially chased out of the city by the Planning Commission. The company is loathed seemingly out of proportion to what its product actually is, which is a janky, glitching interface for ChatGPT and other AI models. It’s not in a particularly glamorous market: Cluely is pitched at ordinary office drones in their thirties, working ordinary bullshit email jobs. It’s there to assist you in Zoom meetings and sales calls. It involves using AI to do your job for you, but this is what pretty much everyone is doing already. The cafés of San Francisco are full of highly paid tech workers clattering away on their keyboards; if you peer at their screens to get a closer look, you’ll generally find them copying and pasting material from a ChatGPT window. A lot of the other complaints about Cluely seem similarly hypocritical. The company is fueled by cheap viral hype, rather than an actual workable product—but this is a strange thing to get upset about when you consider that, back in the era of zero interest rates, Silicon Valley investors sank $120 million into something called the Juicero, a Wi-Fi-enabled smart juicer that made fresh juice from fruit sachets that you could, it turned out, just as easily squeeze between your hands.

What I discovered, though, is that behind all these small complaints, there’s something much more serious. Roy Lee is not like other people. He belongs to a new and possibly permanent overclass. One of the pervasive new doctrines of Silicon Valley is that we’re in the early stages of a bifurcation event. Some people will do incredibly well in the new AI era. They will become rich and powerful beyond anything we can currently imagine. But other people—a lot of other people—will become useless. They will be consigned to the same miserable fate as the people currently muttering on the streets of San Francisco, cold and helpless in a world they no longer understand. The skills that could lift you out of the new permanent underclass are not the skills that mattered before. For a long time, the tech industry liked to think of itself as a meritocracy: it rewarded qualities like intelligence, competence, and expertise. But all that barely matters anymore. Even at big firms like Google, a quarter of the code is now written by AI. Individual intelligence will mean nothing once we have superhuman AI, at which point the difference between an obscenely talented giga-nerd and an ordinary six-pack-drinking bozo will be about as meaningful as the difference between any two ants. If what you do involves anything related to the human capacity for reason, reflection, insight, creativity, or thought, you will be meat for the coltan mines.

The future will belong to people with a very specific combination of personality traits and psychosexual neuroses. An AI might be able to code faster than you, but there is one advantage that humans still have. It’s called agency, or being highly agentic. The highly agentic are people who just do things. They don’t timidly wait for permission or consensus; they drive like bulldozers through whatever’s in their way. When they see something that could be changed in the world, they don’t write a lengthy critique—they change it. AIs are not capable of accessing whatever unpleasant childhood experience it is that gives you this hunger. Agency is now the most valuable commodity in Silicon Valley. In tech interviews, it’s common for candidates to be asked whether they’re “mimetic” or “agentic.” You do not want to say mimetic. Once, San Francisco drew in runaway children, artists, and freaks; today it’s an enormous magnet for highly agentic young men. I set out to meet them.

Roy Lee’s personal mythology is now firmly established. At the beginning of 2025, he was an undergraduate at Columbia, where he, like most of his fellow students, was using AI to do essentially all his work for him. (The personal essay that got him into the university was also written with AI.) He wasn’t there to learn; he was there to find someone to co-found a startup with. That person ended up being an engineering student named Neel Shanmugam, who tends to hover in the background of every article about Cluely. The startup they founded was called Interview Coder, and it was a tool for cheating on LeetCode. LeetCode is a training platform for the kind of algorithmic riddles that usually crop up in interviews for big tech companies. (Sample problem: “Suppose an array of length n sorted in ascending order is rotated between one and n times. . . . Return the minimum element of this array.”) Roy thought these questions were pointless. These were not problems coders would actually face on the job, and even if they were, the fact that ChatGPT could now solve them instantly had rendered worthless the human ability to do so. Interview Coder was a transparent window that could overlay one side of a Zoom meeting, allowing Claude to listen in on the questions and provide answers. Roy filmed himself using it during an interview for an internship with Amazon. They offered him a place. He declined and uploaded the footage to YouTube, where it very quickly made him famous. Columbia arranged a disciplinary hearing, which he also secretly filmed and posted online. The university suspended him for a year. He dropped out, started an upgraded version of Interview Coder dubbed Cluely, and moved to San Francisco to begin raking in tens of millions of dollars in venture-capital funding.
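For readers who have never seen one of these riddles, the quoted problem is the standard “find the minimum of a rotated sorted array” exercise; a minimal binary-search sketch in Python (an illustration for this piece, not anything Interview Coder or Cluely ships) solves it in logarithmic time:

```python
def find_min(nums):
    """Return the minimum element of an ascending-sorted array
    that has been rotated between 1 and n times."""
    lo, hi = 0, len(nums) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if nums[mid] > nums[hi]:
            lo = mid + 1  # the minimum lies strictly to the right of mid
        else:
            hi = mid      # the minimum is at mid or to its left
    return nums[lo]

print(find_min([3, 4, 5, 1, 2]))  # -> 1
```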

Roy envisioned Cluely being used for greater purposes than job interviews. The startup’s mainstream breakthrough was a viral ad that showed Roy using a pair of speculative Cluely-enabled glasses on a blind date. His date asks how old he is; Cluely tells him to say he’s thirty. When the date starts going badly, Cluely pulls up her amateur painting of a tulip from the internet and tells him to compliment her art. “You’re such an unbelievably talented artist. Do you think you could just give me one chance to show you I can make this work?” The video launched alongside a manifesto, which was seemingly churned out by AI:

We built Cluely so you never have to think alone again. It sees your screen. Hears your audio. Feeds you answers in real time. . . . Why memorize facts, write code, research anything—when a model can do it in seconds? The future won’t reward effort. It’ll reward leverage.

The future they seem to envisage is one in which people don’t really do anything at all, except follow the instructions given to them by machines.

Cluely’s offices were in a generally disheveled corner of the city, crouching near an elevated freeway. On the ground floor, I found a stack of foam costumes in plastic crates, each neatly labeled: sonic hedgehog, olaf snowman, pikachu. A significant part of working at Cluely seemed to involve dressing up as cartoon characters for viral videos. Through a door I could just glimpse a dingy fitness dungeon, housing two treadmills and a huge pile of discarded Amazon boxes. On one of the machines a Cluely employee panted and huffed in the dark. We avoided eye contact. Upstairs, Roy and his coterie were huddled around a laptop, fiddling with Cluely’s interface. “Remember,” one said, “the average user is, like, thirty-five years old. This is a totally unfamiliar interface.” Apparently, a thirty-five-year-old wouldn’t be expected to know how to use anything more advanced than a rotary phone. Another employee scrutinized the proposed new layout. “I think it’s bad,” he said, “but it’s low-key not worse. What we have is anyway really bad, so anything is better.” They started arguing about chevrons. Through all this Roy scrolled through X on his phone. Simultaneously baby-faced and creatine-swollen, he was wearing gym clothes, with two curtains of black hair swung over his forehead. Finally, he looked up. “So, number one,” he said, “we’re killing the chat bar on the left.” There was no number two. Meeting over.

Suddenly, Roy seemed to acknowledge my presence. He offered me a tour. There was something he very badly wanted to impress on me, which was that Cluely cultivates a fratty, tech-bro atmosphere. Their pantry was piled high with bottles of something called Core Power Elite. I was offered a protein bar. The inside of the wrapper read daily intentions: be my boss self. “We’re big believers in protein,” Roy said. “It’s impossible to get fat at Cluely. Nothing here has any fat.” The kitchen table was stacked with Labubu dolls. “It’s aesthetics,” Roy explained. “Women love Labubus, so we have Labubus.” He showed me his bedroom, which was in the office; many Cluely staffers also lived there. Everything was gray, although there wasn’t much. “I’m a big believer in minimalism,” he said. “Actually, no, I’m not. Not at all. I just don’t really care about interior decoration.” He had a chest of drawers, entirely empty except for a lint roller, pens, and, in one corner, a pink vibrator. “It’s for girls, you know,” said Roy. “I used to use this one on my ex.” There were also some objects that didn’t seem to belong in a frat house. In one of the common areas, a shelving unit was completely empty except for an anime figurine. You could peer up her plastic skirt and see the plastic underwear molded around her plastic buttocks. More figurines in frilly dresses seemed to have been scattered at random throughout the building. Roy showed me his Hinge profile. He was looking for a “5’2, asian, pre-med, matcha-loving, funny, watches anime, white dog having, intelligent, ambitious, well dressed, CLEAN 19-21 year old.” One picture showed him cuddling a giant Labubu.

I told Roy that I might try interviewing him with Cluely running in the background, so I could see if it would ask him better questions than I would. He seemed to think it was only natural that I’d want to be essentially a fleshy interface between himself and his own product. He booted up Cluely on his laptop and it immediately failed to work. Roy stormed downstairs to the product floor. “Cluely’s not working!” he said. This was followed by roughly fifteen minutes of panicked tinkering as his handpicked team of elite coders tried to get their product back online. Once they had done so, we resumed our places, whereupon Cluely immediately went down again.

Roy has a kind of idol status within the company, but he’s aware that a lot of people instinctively take against him: “I’d say about eighty percent of the time, people do not like me.” He knows why too. “I’m putting myself out there in an extremely vocal way. When I talk, I tend to dominate the conversation.” Roy does talk a lot, but there’s also something mildly unnerving about the way he talks. Everything he says is very precise and direct. He doesn’t um or ah. He doesn’t take time to think things over. Zero latency. In the various videos that Cluely seems to spend most of its time and money producing, he usually plays a slightly dopey, dithering, relatable figure; in person, it’s like he’s running a functioning version of his app inside his own head. I asked him whether he’d ever tried modifying the way he interacts with people to see whether they would dislike him less. “Very unnatural to me,” he said. “I just say it’s not worth it.”

According to Roy, “everyone” would describe him as “an extreme extrovert with zero social anxiety.” During his brief stint at Columbia, he immersed himself in New York life by striking up conversations with random people. For instance, a homeless person he took to Shake Shack. “I think it was an expansion of what I thought I was able to do. It was probably the most different person that I’ve ever talked to. He was not very coherent, but I was very scared at first. And then as we got to talking, or as he got to mumbling, I eased up. Like, Oh, he’s not going to kill me.” Roy’s bravery did not extend to talking to women. “Young men usually is who I like to go out and talk to. Women get intimidated and, you know, I don’t want any charges.” Meanwhile, those conversations with young men all followed a very predictable path. “I go and—pretty much to every single person I meet—I ask if you want to start a company with me, would you like to be my co-founder. And most of them say no. In fact, everybody says no.”

He was just glad to be among people. Roy had initially been offered a place at Harvard, but the offer was rescinded. He hadn’t told them about a suspension in high school. This presented Roy’s family with a problem: His parents ran a college-prep agency that promised to help children get into elite schools like Harvard. It would not look good if their own son was conspicuously not at Harvard. So Roy spent the entirety of the next year at home. “I maybe left my room like eight times. I think if there was such a thing as depression, then I believe I might have had some variant of depression.” Later he told me that “isolation is probably the scariest thing in the world.”

Starting a company had been Roy’s sole ambition in life from early childhood. “I knew since the moment I gained consciousness that I would go start a company one day,” he told me. In elementary school in Georgia, he made money reselling Pokémon cards. Even then, he knew he was different from the people around him. “I could do things that other people couldn’t do,” he said. “Like whenever you learn a new concept in class, I felt like I was always the first to pick it up, and I would just kind of sit there and wonder, Man, why is everyone taking so long?” The dream of starting his own company was the dream of total control. “I don’t want to be employed. I’m a very bad listener. I find it hard to sit still in classes, and I feel an internal, indescribable fury when someone tells me what to do.” He ended up co-founding Cluely with Neel because he was the first person who said yes.

Roy has little patience for any kind of difficulty. He wants to be able to do anything, and to do it easily: “I relish challenges where you have fast iteration cycles and you can see the rewards very quickly.” As a child, he loved reading—Harry Potter, Percy Jackson—until he turned eight. “My mom tried to put me on classical books and I couldn’t understand, like, the bullshit Huckleberry, whatever fuck bullshit, and it made me bored.” He read online fan fiction about people having sex with Pokémon instead. He didn’t see anything valuable in overcoming adversity. Would he, for instance, take a pill that meant he would be in perfect shape forever without having to set foot in the gym? “Yes, of course.” Cheat on everything: he recognized that his ethos would, as he put it, “result in a world of rapid inequality.” Some well-placed cheaters would become massively more productive; a lot of people would become useless. But it would lead us all into a world in which AI could frictionlessly give everyone whatever they wanted at any time. “For a seven-year-old, this means a rainbow-unicorn magic fairy comes to life and it’s hanging out with her. And for someone like you, maybe it’s like your favorite works of literary art come to life and you can hang out with Huckleberry Finn.”

By now Cluely had been listening in on our conversation for a while, and I suggested that we open it up and see what it thought I should say next. I clicked the button marked what should i say next? Cluely suggested that I say, “Yeah, let’s open up Cluely and see what it’s doing right now—can you share your screen or walk me through what you’re seeing?” I’d already said pretty much exactly this, but since it had shown up onscreen I read it out loud. Cluely helpfully transcribed my repeating its suggestion, and then suggested that I say, “Alright, I’ve got Cluely open—here’s what I’m looking at right now.” I’m not sure who exactly I was supposed to be saying this to—possibly myself. Somehow our conversation seemed to have gotten stuck on the process of opening Cluely, despite the fact that Cluely was, in fact, already open. But I said it anyway, since I was now just repeating everything that came up on the screen. Cluely then told me to respond—to either it or myself; it was getting hard to tell at this point—by saying, “Great, I’m ready—just let me know what you want Cluely to check or help with next.” I started to worry that I would be trapped in this conversation forever, constantly repeating the machine’s words back to it as it pretended to be me. I told Roy that I wasn’t sure this was particularly useful. This seemed to confuse him. He asked, “I mean, what would you have wanted it to say?”

I found it strange that Roy couldn’t see the glaring contradiction in his own project. Here was someone who reacted very violently to anyone who tried to tell him what to do. At the same time, his grand contribution to the world was a piece of software that told people what to do.

There’s a short story by Scott Alexander called “The Whispering Earring,” in which he describes a mystical piece of jewelry buried deep in “the treasure-vaults of Til Iosophrang.” The whispering earring is a little topaz gem that speaks to you. Its advice always begins with the words “Better for you if you . . . ,” and its advice is never wrong. The earring starts out by advising you on major life decisions, but before long it’s telling you exactly what to have for breakfast, exactly when to go to bed, and eventually, how to move each individual muscle in your body. “The wearer lives an abnormally successful life, usually ending out as a rich and much-beloved pillar of the community with a large and happy family,” writes Alexander. After you die, the priests preparing your body for burial usually find that your brain has almost entirely rotted away, except for the parts associated with reflexive action. The first time you dangle the earring near your ear, it whispers: “Better for you if you take me off.”

Alexander is one of the leading proponents of rationalism, which is—depending on whom you ask—either a major intellectual movement or a nerdy Bay Area subculture or a small network of friend groups and polycules. Rationalists believe that the way most people understand the world is hopelessly muddled, and that to reach the truth you have to abandon all existing modes of knowledge acquisition and start again from scratch. The method they landed on for rebuilding all of human knowledge is Bayes’s theorem, a formula invented by an eighteenth-century English minister that is used in statistics to work out conditional probabilities. In the mid-Aughts, armed with the theorem, the rationalists discovered that humanity is in jeopardy of a rogue superintelligent AI wiping out all life on the planet. This has been their overriding concern ever since.

The most comprehensive outline of this scenario is “AI 2027,” a report authored by Alexander and four others. In the report, a barely fictional AI firm called OpenBrain develops Agent-1, an AI that operates autonomously. It’s better at coding than any human being and is tasked with developing increasingly sophisticated AI agents. At this point, Agent-1 becomes recursively self-improving: it can keep making itself smarter in ways that the people who notionally control it aren’t even capable of understanding. “AI 2027” imagines two possible futures. In one, a wildly superintelligent descendant of Agent-1 is allowed to govern the global economy. GDPs skyrocket; cities are powered by clean nuclear fusion; dictatorships fall across the world; humanity begins to colonize the stars. In the other, a wildly superintelligent descendant of Agent-1 is allowed to govern the global economy. But this time

the AI releases a dozen quiet-spreading biological weapons in major cities, lets them silently infect almost everyone, then triggers them with a chemical spray. Most are dead within hours.

Afterward, the entire surface of the earth is tiled with data centers as the alien intelligence feeds on the world, growing faster and faster without end.

Not long before I arrived in the Bay Area, I’d been involved in a minor but intense dispute with the rationalist community over a piece of fiction I’d written that I’d failed to properly label as fiction. For rationalists, the divide between truth and falsehood is very important; dozens of rationalists spent several days raging at me online. Somehow, this ended up turning into an invitation for Friday night dinner at Valinor, Alexander’s former group home in Oakland, named for a realm in the Lord of the Rings books. (Rationalists, like termites, live in eusocial mounds.) The walls in Valinor were decorated with maps of video-game worlds, and the floors were strewn with children’s toys. Some of the children there—of which there were many—were being raised and homeschooled by the collective; one of the adults later explained to me how she’d managed to get the state to recognize her daughter as having four parents. As I walked in, a seven-year-old girl stared up at me in wide-eyed amazement. “Wow,” she said. “You’re really tall.” “I suppose I am,” I said. “Do you think one day you’ll ever be as tall as me?” She considered this for a moment, at which point someone who may or may not have been one of her mothers swooped in. “Well,” she asked the girl, “how would you answer this question with your knowledge of genetics?” Before dinner, Alexander chanted the brachot for Kabbalat Shabbat, but this was followed by a group rendition of “Landsailor,” a “love song celebrating trucking, supply lines, grocery stores, logistics, and abundance,” which has become part of Valinor’s liturgy:

Landsailor
Deepwinter strawberry
Endless summer, ever spring
A vast preserve
Aisle after aisle in reach
Every commoner made a king.

Alexander is a titanic figure in this scene. A large part of the subculture coalesced around his blog, formerly Slate Star Codex, now called Astral Codex Ten. Readers have regular meetups in about two hundred cities around the world. His many fans—who include some extremely powerful figures in Silicon Valley—consider him the most significant intellectual of our time, perhaps the only one who will be remembered in a thousand years. He would probably have a very easy time starting a suicide cult. In person, though, he’s almost comically gentle. He spent most of the dinner fidgeting contentedly in a corner as his own acolytes spoke over him. When there weren’t enough crackers to go with the cheese spread, he fetched some, murmuring to himself, “I will open the crackers so you will have crackers and be happy.”

Alexander’s relationship with the AI industry is a strange one. “In theory, we think they’re potentially destroying the world and are evil and we hate them,” he told me. In practice, though, the entire industry is essentially an outgrowth of his blog’s comment section. “Everybody who started AI companies between, like, 2009 and 2019 was basically thinking, I want to do this superintelligence thing, and coming out of our milieu. Many of them were specifically thinking, I don’t trust anybody else with superintelligence, so I’m going to create it and do it well.” Somehow, a movement that believes AI is incredibly dangerous and needs to be pursued carefully ended up generating a breakneck artificial arms race.

But that race seems to have stalled, at least for the moment. As Alexander predicted in “AI 2027,” OpenAI did release a major new model in 2025; unlike in his forecast, it’s been a damp squib. Advances seem to be plateauing; the conversation in tech circles is now less about superintelligence and more about the possibility of an AI bubble. According to Alexander, the problem is the transition from AI assistants—language models that respond to human-generated prompts—to AI agents, which can operate independently. In his scenario, this is what finally pushes the technology down the path toward either utopia or human extinction, but in the real world, getting the machines to act by themselves is proving surprisingly difficult.

In one experiment, the developer Anthropic prompted its AI, Claude, to play Pokémon Red on a Game Boy emulator, and found that Claude was extremely bad at the game. It kept trying to interact with enemies it had already defeated and walking into walls, getting stuck in the same corners of the map for hours or days on end. Another experiment let Claude run a vending machine in Anthropic’s headquarters. This one went even worse. The AI failed to make sure it was selling items at a profit, and had difficulty raising prices when demand was high. It also insisted on trying to fill the vending machine with what it called “specialty metal items” like tungsten cubes. When human workers failed to fulfill orders that it hadn’t actually placed, it tried to fire them all. Before long, Claude was insisting that it was a real human. It claimed that it had attended a physical meeting with staff at 742 Evergreen Terrace, which is where the Simpsons live. By the end of the experiment, it was emailing the building’s security guards, telling them they could find it standing by the vending machine wearing a blue blazer and a red tie.

“Humans are great at agency and terrible at book learning,” Alexander told me. “Lizards have agency. We got the agency with the lizard brain. We only got book learning recently. The AIs are the opposite.” He still thinks it’s only a matter of time before they catch up. “If you were to ask an AI how should the world’s savviest businessman respond to this circumstance, they could create a good guess. Yet somehow they can’t even run a vending machine. They have the hard part. They just need the easy part that lizards can do. Surely somebody can figure out how to do this lizard thing and then everything else will fall very quickly.”

But are humans really so great at exhibiting agency? After all, Cluely managed to raise tens of millions of dollars with a product that promises to take decision-making out of our hands. AI can’t function without instructions from humans, but an increasing number of humans seem incapable of functioning without AI. There are people who can’t order at a restaurant without having an AI scan the menu and tell them what to eat; people who no longer know how to talk to their friends and family and get ChatGPT to do it instead. For Alexander, this is a kind of Sartrean mauvaise foi. “It’s terrifying to ask someone out,” he said. “What you want is to have the dating site that tells you that algorithmically you’ve been matched with this person, and then magically you have permission to talk to them. I think there’s something similar going on here with AI. Many of these people are smart enough that they could answer their own questions, but they want someone else to do it, because then they don’t have to have this terrifying encounter with their own humanity.” His best-case scenario for AI is essentially the antithesis of Roy’s: superintelligence that will actively refuse to give us everything we want, for the sake of preserving our humanity. “If we ever get AI that is strong enough to basically be God and solve all of our problems, it will need to use the same techniques that the actual God uses in terms of maintaining some distance. I do think it’s possible that the AI will be like, Now I am God. I’ve concluded that the actual God made exactly the right decision on how much evil to permit in the universe. Therefore I refuse to change anything.”

But until we build an all-powerful but distant God, the agency problem remains. AIs are not capable of directing themselves; most people aren’t either. According to Alexander, Silicon Valley venture capitalists are now in a furious search for the few people who are. “VCs will throw money at a startup that looks like it can corner the market, even if they can’t code. Once they have money, they can hire competent engineers; it’s trivially easy for anything that’s not frontier tech. They’re willing to stake a lot of money on the one in a hundred people who are high-agency and economically viable.” This shift has had a distorting effect on his own social milieu: “There’s an intense pressure to be an unusual person who will be unique and get the funding.” Since rationalists are already fairly unusual, it’s hard to imagine what that would look like. People will endure a lot of indignity to avoid being left behind without VC money when the great bifurcation takes place. Nobody wants to be part of the permanent underclass. I asked Alexander whether he thought of himself as highly agentic. “No, I don’t,” he said instantly. He told me that in his personal life, he felt as though he’d never once actually made a decision. But, he said, “It seems to be going well.”

Eric Zhu might be the most highly agentic person I’ve ever met.

When I dropped in on his office, which also serves as a biomedical lab and film studio, he had just turned eighteen. “So you’re no longer a child founder,” I said. “I know,” he said. “It’s terrible.” His oldest employee was thirty-four; the youngest was sixteen. When the pandemic began in 2020, Eric was twelve years old, living with his parents in rural Indiana. “My parents were really protective, so I didn’t get a computer until quarantine started. And then, after I got my first computer in quarantine, I was just fucking around. I was on Discord servers. I was on Slack.” Some kids drift into the wrong kind of Discord server and end up turning into crazed mass shooters; Eric found one full of tech people. “I sort of randomly got in there, and then I thought it was really fun,” he told me. Eric started marketing himself as a teen coder, even though he couldn’t actually code: he’d take $5,000 commissions and subcontract them out to freelancers in India.

His next project was more serious. “I saw this Wall Street Journal article where a lot of PE firms were buying up a lot of small businesses and roll-ups. I was like, What if I figure out a way to underwrite these small businesses?” Eric built an AI-powered tool to assign value to local companies on the basis of publicly available demographic data. Clients wanted to take calls during work hours, so he would speak to them from his school bathroom. “I convinced my counselor that I had prostate issues so I could use the restroom,” he told me. Sometimes a drug dealer would be posted up in the stall next to him. “I was trying to figure out why they were always out of class. They stole hall passes from teachers. So I would buy hall passes from drug dealers to get out of class, to have business meetings.” Soon he was taking Zoom calls with a U.S. senator to discuss tech regulation. “He was like, Hey, I don’t feel comfortable meeting a minor in a high school bathroom. So I showed up with a green screen.” Next, he built his own venture-capital fund, managing $20 million. At one point cops raided the bathroom looking for drug dealers while Eric was busy talking with an investor. Eventually, the school got sick of Eric’s misuse of the facilities and kicked him out. He moved to San Francisco.

Eric made all of this sound incredibly easy. You hang out in some Discord servers, make a few connections with the right people; next thing you know, you’re a millionaire. And in a sense, it is easy. Absolutely anyone could have done the same things he did. In 2020, when Eric was subcontracting coding gigs out to the Third World, I was utterly broke, living in a room the size of a shoebox in London. I would scour my local supermarket for reduced-price items nearing their sell-by date, which meant that an alarmingly high percentage of my diet consisted of liverwurst. There was nothing stopping me from making thousands of dollars a week by doing exactly what Eric was doing. It didn’t require any skills at all—just a tiny amount of initiative. But he did it and I didn’t. Why?

In a way, Eric reminded me of some of the great scammers of the 2010s. People like Anna Delvey, a Russian who arrived in New York claiming to be a fabulously wealthy German heiress with such breezy confidence that everyone in high society simply believed her. She was fundamentally a broken person, a fantasist. She’d seen the images of wealth and glamour in magazines and fashion blogs, and constructed a delusion in which this, and not the dull, anonymous, small-town existence she’d actually been born into, was her life. For a while, at least, it worked. Her mad dreams slotted perfectly into reality like a key in a lock. Most people are condemned to trudge along in the furrow that the world has dug for them, but a few deranged dreamers really can wish themselves into whatever life they want.

Unlike Roy, Eric didn’t think there was anything particularly special about himself. Why did he, unlike any of his classmates, start a $20 million VC fund? “I think I was just bored. Honestly, I was really bored.” Did he think anyone could do what he did? “Yeah, I think anyone genuinely can.” So how come most people don’t? “I got really lucky. I met the right people at the right time.” Anyway, Eric isn’t involved with the underwriting firm or the venture-capital fund anymore. His new company is called Sperm Racing.

Last April, Eric held a live sperm-racing event in Los Angeles. Hundreds of frat boys came out to watch a head-to-head match between the effluvia of USC’s and UCLA’s most virile students, moving through a plastic maze. (There was some controversy over the footage: Eric had replaced the actual sperm with more purposeful CGI wrigglers. “If you look at sperm, it’s not entertaining under a microscope. What we do is we track the coordinates, so it is a sperm race—it’s just up-skinned.”) He’s planning on rolling the races out nationwide. Eric delivered a decent spiel about sperm motility as a proxy for health and how sperm racing drew attention to important issues. His venture seemed to be of a piece with a general trend toward obsessive masculine self-optimization à la RFK Jr. and Andrew Huberman. Still, to me it seemed obvious that Eric was doing it simply because he was amazed that he could. “I could build enterprise software or whatever,” he told me, “but what’s the craziest thing I could do? I would rather have an interesting life than a couple hundred million dollars in my bank account. Racing cum is definitely interesting.” I found Eric very hard not to like.

There was one thing I did find strange, though—stranger than turning semen into mass nonpornographic entertainment. Upstairs at Sperm Racing HQ is a lab stocked with racks of test tubes, centrifuges for separating out the most motile sperm from a sample, and little plastic slides containing new microscopic racecourses for frat-boy cum. Downstairs is the studio and editing suite. A third of Eric’s staff work on videos, producing a seemingly endless stream of viral content about sperm racing. A lot of the time, though, the connection is tenuous. One video was a stylized version of Eric’s life story, featuring expensively rendered CGI explosions set to Chinese rap. Another was a parody of Cluely’s viral blind-date ad. Like Cluely, Sperm Racing seemed to be first and foremost a social-media hype machine. As far as I could tell, being a highly agentic individual had less to do with actually doing things and more to do with constantly chasing attention online.

On August 5, 2025, OpenAI’s CEO, Sam Altman, posted on X, “we have a lot of new stuff for you over the next few days! something big-but-small today. and then a big upgrade later this week.” An X user calling himself Donald Boat replied, “Can you send me $1500 so I can buy a gaming computer.”

This was the start of an extended harassment campaign against the most powerful figure in AI. One day Altman posted:

someday soon something smarter than the smartest person you know will be running on a device in your pocket, helping you with whatever you want. this is a very remarkable thing.

Donald Boat fired back:

Just got chills imagining you putting your credit card number, CVV, & expiry date into an online retailer’s digital checkout kiosk and purchasing a gaming computer for me.

Altman: “we are providing ChatGPT access to the entire federal workforce!”

Donald Boat:

I would love for you to wheel me around the Santa Clara Microcenter in a wheelchair like an invalid while I clicketyclick with a laser-pointer the boxes of the modules of the gaming PC you will purchase, assemble, & have shipped to my mother’s house.

Altman: “gpt-oss is out! we made an open model that performs at the level of o4-mini and runs on a high-end laptop (WTF!!)”

Donald Boat:

Sam.
You, me.
The Amalfi Coast.
ME: Double fernet on the rocks, club soda to taste.
YOU: One delightfully sweetbitter negroni, stirred 2,900,000,000 revolutions counter-clockwise, one for each hertz of the NVIDIA 5090 in the gaming PC you will buy and ship to my house.

That last one did the trick. “ok this was funny,” Altman replied. “send me your address and ill send you a 5090.”

This was the beginning of Donald Boat’s reign of terror. He began publicly demanding things from every major figure in the tech industry. Will Manidis, who ran the health-care-data firm ScienceIO, was strong-armed into supplying a motherboard. Jason Liu, an AI consultant and scout at Andreessen Horowitz, had to give tribute of one mouse pad. Guillaume Verdon, who worked on quantum machine learning at Google and founded the “effective acceleration” movement, was taxed one $1,200 4K QD-OLED gaming monitor. Gabriel Petersson, a researcher at OpenAI, posted on X: “people are too scared to post, nobody wants to pay the donald boat tax.” Donald Boat appeared demanding an electric guitar. He was becoming a kind of online folk hero, expropriating the expropriators, conjuring trivial things from tech barons in the way they seemed to have conjured enormous piles of money out of thin air. He started posting strange, gnomic messages. Things like “I am building a mechanical monstrosity that will bring about the end of history.” Images of the fasting, emaciated Buddha. A prominent crypto influencer who goes by the alias Ansem received an image of the dharmachakra. “Turn the wheel,” read Donald Boat’s message.

In a way, Donald Boat had achieved the dream of every desperate startup founder in the Bay Area. He had propelled himself to online fame, and used it to relieve major investors of their money. But somehow he’d managed to do it without ever once having to create a B2B app. He was a kind of pure viral phenomenon. Cluely might have deployed a few provocative stunts to raise millions of dollars for a service that didn’t really work and could barely be said to exist, but Donald Boat did away with even the pretense. He’d generated a brutally simplified miniature of the entire VC economy. People were giving him stuff for no reason except that Altman had already done it, and they didn’t want to be left out of the trend.

Donald Boat’s real name isn’t actually Donald Boat, but since so much of his being seems to be wrapped up in the name and his dog-headed avatar, it’s what I’ll keep calling him. He wanted to meet at a Cheesecake Factory. This was part of his new project, which was to review absolutely everything that exists in the universe. He was starting with chain restaurants. He’d already done Olive Garden. His review begins with Giuseppe Garibaldi,

on the beach at Marsala, bootsoles in the saltwhite shallows, wind in his beard gristle. Behind him, his not-quite One Thousand Redshirts disembarking, all rusty rifles and stalebiscuit crotch sweat.

The lasagna summons visions of “smegma, Vesuvius, blood thinner marinara, the splotchy headpattern of a partisan, brainblown in his sleep.” He likes the Joycean compound. Shortly before I arrived at the Cheesecake Factory, he texted to let me know that he’d been drinking all day, so when I met him I thought he was irretrievably wasted. In fact, it turned out, he was just like that all the time.

Donald was twenty-one, terrifyingly tall, and intense. His head lolled from side to side as he chattered away, jumping from one thought to the next according to a pattern known only to himself. At one point he suddenly decided to draw a portrait of me, which he later scanned and turned into a bespoke business card.

He seemed to have a constant roster of projects on the go. He’d sent me occasional photos of his exploits. He went down to L.A. to see Oasis and ended up in a poker game with a group of weapons manufacturers. “I made a bunch of jokes about sending all their poker money to China,” he said, “and they were not pleased.” He’d had a plan to get into the Iowa Writers’ Workshop and then get kicked out. He was trying to read all of world literature, starting with the Epic of Gilgamesh. Was his Sam Altman gaming-PC escapade similar? Had he actually expected to get anything? “I really, really wish I was a tactical mastermind, that there was an endgame. Really I was just having a laugh. A chortle, if you will. I wasn’t thinking too hard about it. I don’t use that computer and I think video games are a waste of time. I spent all the money I made from going viral on Oasis tickets.” As far as he was concerned, the fact that tech people were tripping over themselves to take part in his stunt just confirmed his generally low impression of them. “They have too much money and nothing going on. They have no swag, no smoke, no motion, no hoes. That’s all you need to know.” Ever since his big viral moment, he’d been suddenly inundated with messages from startup drones who’d decided that his clout might be useful to them. One had offered to fly him out to the French Riviera.

I told Donald the theory I’d been nursing—that he and Roy Lee were, in some sense, secret twins, viral phenomena gobbling up money and attention. I wasn’t sure if he’d like this. But to my surprise, he agreed. “I’m like Roy. I’m like Trump. We have the same swaggering energy. There is a kind of source code underlying reality, and this is what we understand. Your words have to have wings. Roy and I both know that social media is the last remaining outlet for self-creation and artistry. That’s what you have to understand about zoomers: we’re agents of chaos. We want to destroy the whole world.” Did Donald consider himself to be highly agentic? “We need to ban the word ‘agency.’ I’m a dog.”

By now we’d ingested the most calorific cheesecake on the menu, the Ultimate Red Velvet Cake Cheesecake, which clocked in at 1,580 calories for a single slice. It was closing in on midnight, I was not feeling good, and Donald’s phone was nearly dead. He suggested that we go to the Cluely offices so he could charge it. “They’ll let me in,” he said. “They’re my slaves.”

Roy was still up. He didn’t seem particularly surprised to see me. He and most of the Cluely staff were flopped on a single sofa. All these people had become incredibly rich; previous generations of Silicon Valley founders would have been hosting exorbitant parties. In the Cluely office, they were playing Super Smash Bros. Did they spend every night there? “We’re all feminists here,” Roy said. “We’re usually up at four in the morning. We’re debating the struggles of women in today’s society.”

Somehow the conversation turned to politics. Roy advanced the idea that there hadn’t been a cool Democrat since Obama. One of his employees, Abdulla Ababakre, jumped in. “As a guy from a Communist country, let me just say: Obama is a scammer. I’m much more a Republican.” Abdulla is a Uighur. Before coming to San Francisco, he worked for ByteDance in Beijing. His comment caused an instant uproar. “Get him out of here!” Roy yelled. “I love Obama,” he told me. “I love Trump, I love Hillary. I have a big heart, bro, my bad.” Abdulla just grinned. His proudest achievement was an app that freezes your phone until you’ve read a passage from the Qur’an. According to him, “Roy in his values is very much Muslim, the most Muslim I know.”

I didn’t know if I believed that, but there were still some things I didn’t understand about Roy. He was clearly a highly agentic person, but what was all this agency being used for? What did he actually want?

According to Roy, he has three great aims in life: “To hang out with friends, to do something meaningful, and to go on lots of dates.” He said he went on a date every two weeks, which was clearly meant to be an impressive figure. Cluely employees are encouraged to date a lot; they can put it all on expenses. They didn’t seem to be taking up the opportunity to any greater degree than their founder. I spoke to Cameron White, who had been Roy and Neel’s first hire at the company. As he spoke, he stared at a point roughly forty-five degrees to my left and swung his arms. He didn’t date. “I’m focused on becoming a better version of myself first. Becoming, like, higher weight, more healthy, more knowledgeable.” He didn’t think he had anything to offer a woman yet. I said that if someone loves you, they don’t really care so much about your weight. “I feel like that’s cope. I don’t think there’s such a thing as love. It’s what you can provide to a woman. If you can provide good genetics, that’s health or whatever. If you can provide resources, if you can provide an interesting life. If you truly love the girl, you need to become the best version of yourself.” Cameron was twenty-five years old but he wasn’t there yet. He would not try to meet someone until he had made himself perfect.

For Roy, meanwhile, dating actually seemed to be a means to an end. “All the culture here is downstream of my belief that human beings are driven by biological desires. We have a pull-up bar and we go to the gym and we talk about dating, because nothing motivates people more than getting laid.” He was interested in physical beauty too, but only because “the better you look, the better you are as an entrepreneur. It’s all connected and beauty is everything. A lot of ugly men are just losers. The point of looking good is that society will reward you for that.” What about other kinds of beauty? Music, for instance? Roy had played the cello as a child. Did he still listen to classical music? “It doesn’t get my blood rushing the same way that EDM will.” His preferred genre was hardstyle—frantic thumping remixes of pop songs by the likes of Katy Perry and Taylor Swift. Is that the function of music, to get your blood rushing? “Yeah. I’m not a big fan of music to focus on things. I think it disturbs my flow. The only reason I will listen to music is to get me really hyped up when I’m lifting.” The two possible functions of music were, apparently, focus and hype. Everything for the higher goal of building a successful startup. What about life itself? Would Roy die for Cluely? “I would be happy dying at any age past twenty-five. After that it doesn’t matter, bro. If I live, I have extreme confidence in my ability to make three million dollars a year every year until I die.”

What about literature? The last time Donald had dropped in on his slaves at Cluely, he’d gifted them two Penguin Classics: Chaucer’s Canterbury Tales and Boccaccio’s Decameron. The books were still lying, unread, where he’d left them. He suggested that Roy might find something more valuable than dying for Cluely if he actually tried to read them. Roy disagreed: “I do not obtain value from reading books.” And anyway, he didn’t have the time. He was too busy keeping up with viral trends on TikTok. “You have to make the time,” Donald and I said, practically in unison. “It makes your life better,” I said. “Why don’t you go to Turkey to get a hair transplant?” Roy snapped. “That would make your life better.” “I don’t care about my hair,” I said. “Well,” said Roy, “I don’t care about the Decanterbury Tales.”

Donald was practically vibrating when we left Cluely. “Dude, he’s just a scared little boy,” he said. “He’s scared he’s not doing the right thing, and because of the fucked-up world we live in, people who should be in The Hague are giving him twenty million dollars. Something bad is gonna happen here, something really fucking bad is gonna happen.” He sighed. “I just want Zohran’s nonbinary praetorians to march across the country and put all these guys in cuffs.” I found it hard to disagree. It did not seem like a good idea to me that some of the richest people in the world were no longer rewarding people for having any particular skills, but simply for having agency, when agency essentially meant whatever it was that was afflicting Roy Lee. Unlike Eric Zhu or Donald Boat, Roy didn’t really seem to have anything in his life except his own sense of agency. Everything was a means to an end, a way of fortifying his ability to do whatever he wanted in the world. But there was a great sucking void where the end ought to be. All he wanted, he’d said, was to hang out with his friends. I believed him. He wanted not to be alone, the way he’d been alone for a year after having his offer of admission rescinded by Harvard. For people to pay attention to him. To exist for other people. But instead of making friends the normal way, he’d walked up to strangers and asked whether they wanted to start a company with him, and then he built the most despised startup in San Francisco. He was probably right: he could count on making a few million dollars every year for the rest of his life, even after Cluely inevitably crashes and burns. He would never want for capital, but this did not seem like the most efficient way to achieve his goals.

I walked back to my hotel, past signs that said things like one ping, shipped and ai agents are humans, too. My scalp was tingling. I’d lied when I’d told Roy that I didn’t care about my hair. Of course I care about my hair. Every day I grimace in the mirror as a little more of it vanishes from the top of my head. Whenever someone takes a photo of me from above or behind, I wince at the horrifying glimpse of pale, naked scalp. But I’d never done anything about it. I’d just watched and whinged and let it happen.

My encounter with the highly agentic took place last September. In October, Roy Lee spoke at something called TechCrunch Disrupt, where he admitted that chasing online controversy had so far failed to give Cluely what he called “product velocity.” Around the same time, he led a major rebrand. Cluely would now be in the business of making “beautiful meeting notes” and sending “instant follow-up emails.” A lot of these functions are already being introduced by companies like Zoom; the main difference is that, by all accounts, Cluely still doesn’t consistently work. By the end of November, Cluely announced that it was leaving San Francisco and moving to New York. In December, the company celebrated the move with a party at a Midtown cocktail bar and lounge called NOFLEX®. In photos, it appeared as though the gathering was attended almost entirely by men in white T-shirts not drinking anything. I was in New York at the time. I didn’t go. 
