It is 1995.
Your computer modem screeches as it tries to connect to something called the internet. Maybe it works. Maybe you try again.
For the first time in history, you can exchange letters with someone across the world in seconds. Only 2,000-something websites exist, so you could theoretically visit them all over a weekend. Most websites are just text on gray backgrounds with the occasional pixelated image. Loading times are brutal: a single image takes a minute, and a one-minute video could take hours. Most people do not trust putting their credit cards online. The advice everyone gives: don’t trust strangers on the internet.
People soon split into two camps.
Optimists predict grand transformations. Some believe digital commerce will overtake physical retail within years. Others insist we’ll wander around in virtual reality worlds.
“I expect that within the next five years more than one in ten people will wear head-mounted computer displays while traveling in buses, trains, and planes.” - Nicholas Negroponte, MIT Professor, 1993
Pessimists call the internet a fad and a bubble.
If you told someone in 1995 that within 25 years, we’d consume news from strangers on social media over newspapers, watch shows on-demand in place of cable TV, find romantic partners through apps more than through friends, and flip “don’t trust strangers on the internet” so completely that we’d let internet strangers pick us up in their personal vehicles and sleep in their spare bedrooms, most people would find that hard to believe.
We’re in 1995 again. This time with Artificial Intelligence.
And both sides of today’s debate are making similar mistakes.
One side warns that AI will eliminate entire professions and cause mass unemployment within a couple of years. The other claims that AI will create more jobs than it destroys. One camp dismisses AI as overhyped vaporware destined for a bubble burst, while the other predicts it will automate every knowledge task and reshape civilization within the decade.
Both are part right and part wrong.
Geoffrey Hinton, whom some call the Father of AI, warned in 2016 that AI would trigger mass unemployment. “People should stop training radiologists now,” he declared, certain that AI would replace them within years.
Yet as Deena Mousa, a researcher, shows in “The Algorithm Will See You Now,” AI hasn’t replaced radiologists. The field is thriving.
In 2025, American diagnostic radiology residency programs offered a record 1,208 positions across all radiology specialties, a four percent increase from 2024, and the field’s vacancy rates are at all-time highs. In 2025, radiology was the second-highest-paid medical specialty in the country, with an average income of $520,000, over 48 percent higher than the average salary in 2015.
Mousa identifies several reasons the prediction failed: real-world complexity, a job that involves far more than image recognition, and regulatory and insurance hurdles. Most critical, she argues, is Jevons Paradox, the economic principle that improving the efficiency of a resource increases, rather than decreases, total consumption of that resource. As AI makes radiologists more productive, better diagnostics and faster turnaround at lower cost mean more people get scans. So employment doesn’t decrease. It increases.
This is also the tech world’s consensus. Microsoft CEO Satya Nadella agrees, as does Box CEO Aaron Levie, who suggests:
“The least understood yet most important concept in the world is Jevons Paradox. When we make a technology more efficient, demand goes well beyond the original level. AI is the perfect example of this—almost anything that AI is applied to will see more demand, not less.”
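Jevons’ logic can be made concrete with a toy elasticity calculation. All numbers below are illustrative assumptions, not data from any of the sources above: a constant-elasticity demand curve, and a productivity gain that doubles output per worker and halves price. Whether employment rises or falls turns entirely on how elastic demand is.

```python
# Toy model of Jevons Paradox (all numbers are assumptions for illustration).

def demand(price, elasticity, base_price=100.0, base_quantity=1000.0):
    """Constant-elasticity demand: quantity scales as (price ratio)^-elasticity."""
    return base_quantity * (price / base_price) ** -elasticity

old_price, new_price = 100.0, 50.0  # a 2x productivity gain halves the price
old_workers = demand(old_price, 1.0) / 1.0  # baseline: 1 unit of output per worker

# Elastic demand (lots of unmet demand): quantity more than doubles, so even
# at 2 units per worker, employment rises. Jevons wins.
elastic_workers = demand(new_price, 1.5) / 2.0

# Inelastic demand (near saturation): quantity grows, but not enough to offset
# the productivity gain, so employment falls. Automation wins.
inelastic_workers = demand(new_price, 0.5) / 2.0

print(f"baseline workers:          {old_workers:.0f}")
print(f"elastic demand, workers:   {elastic_workers:.0f}")
print(f"inelastic demand, workers: {inelastic_workers:.0f}")
```

With elasticity 1.5, halving the price raises demand about 2.8x and employment rises; with elasticity 0.5, demand grows only 1.4x and employment falls despite higher total consumption.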
They’re only half right.
First, as Andrej Karpathy, the computer scientist who coined the term vibe coding, points out, radiology is not the right place to look for initial job displacement.
“Radiology is too multi-faceted, too high risk, too regulated. When looking for jobs that will change a lot due to AI on shorter time scales, I’d look in other places - jobs that look like repetition of one rote task, each task being relatively independent, closed (not requiring too much context), short (in time), forgiving (the cost of mistake is low), and of course automatable giving current (and digital) capability. Even then, I’d expect to see AI adopted as a tool at first, where jobs change and refactor (e.g. more monitoring or supervising than manual doing, etc).”
Second, the tech consensus that we will see increased employment actually depends on the industry: specifically, on how much unfulfilled demand can be unlocked in that industry, and on whether that demand growth outpaces continued automation and productivity improvements.
To understand this better, look at what actually happened in three industries over the 200-year period from 1800 to 2000. In the paper “Automation and Jobs: When Technology Boosts Employment,” the economist James Bessen shows employment, productivity, and demand data for the textile, iron and steel, and motor vehicle industries.
After automation, employment in both textiles and iron and steel increased for nearly a century before a steep decline. Vehicle manufacturing, by contrast, has held steady and hasn’t seen the same steep decline yet.
To answer why those two industries saw sharp declines but motor vehicle manufacturing did not, first look at the productivity of workers in all three industries:
Then look at the demand across those three industries:
What the graphs show is a consistent pattern (note: the productivity and demand graphs are logarithmic, meaning productivity and demand grew exponentially). Early on, a service or product is expensive because many workers are needed to produce it. Most people can’t afford it, or use it sparingly. In the early 1800s, for example, most people could afford only a pair of pants or a shirt. Then automation makes workers dramatically more productive. A textile worker in 1900 could produce fifty times more than one in 1800. This productivity explosion crashes prices, which creates massive new demand. Suddenly everyone can afford multiple outfits instead of just one or two. Employment and productivity both surge (note: employment growth masks internal segment displacement and wage changes; see footnote).
Once demand saturates, employment stops increasing and holds steady at peak demand. But as automation continues and workers keep getting more productive, employment starts to decline. In textiles, mechanization enabled massive output growth but ultimately displaced workers once consumption plateaued while automation and productivity kept climbing. We probably don’t need infinite clothing. Similarly, patients will likely never need a million radiology reports, no matter how cheap they become, so radiologists will eventually hit a ceiling. We don’t need infinite food, clothing, tax returns, and so on.
Motor vehicles, in Bessen’s graphs, tell a different story because demand remains far from saturated. Most people globally still don’t own cars. Automation hasn’t completely conquered manufacturing either (Tesla’s retreat from full manufacturing automation proves the current technical limits). When both demand and automation potential remain high, employment can sustain or even grow despite productivity gains.
Software presents an even more interesting question. How many apps do you need? What about software that generates applications on demand, that creates entire software ecosystems autonomously? Until now, handcrafted software was the constraint. Expensive software engineers and their labor costs limited what companies could afford to build. Automation changes this equation by making those engineers far more productive. Both consumer and enterprise software markets suggest significant unmet demand because businesses have consistently left projects unbuilt. They couldn’t justify the development costs or had to allocate limited resources to their top priority projects. I saw this firsthand at Amazon. Thousands of ideas went unfunded not because they lacked business value, but because of the lack of engineering resources to build them. If AI can produce software at a fraction of the cost, that unleashes enormous latent demand. The key question then is if and when that demand will saturate.
So to generalize: in each industry, employment hinges on a race between two forces:
1. The magnitude and growth of unmet market demand, and
2. Whether that demand growth outpaces productivity improvements from automation.
Different industries will experience different outcomes depending on who’s winning that demand and productivity race.
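The race can be sketched with a minimal model. The functional forms and parameters below are my assumptions, not Bessen’s data: demand follows an S-curve that eventually saturates, productivity grows exponentially with automation, and employment is simply demand divided by output per worker.

```python
import math

def employment(t, saturation=1000.0, growth=0.08, midpoint=60.0, productivity_rate=0.03):
    """Workers needed = demand / output-per-worker (assumed functional forms)."""
    demand = saturation / (1 + math.exp(-growth * (t - midpoint)))  # logistic demand
    productivity = math.exp(productivity_rate * t)                  # steady automation gains
    return demand / productivity

# Textile-like industry: employment rises while new demand is being unlocked,
# peaks, then declines once demand saturates but productivity keeps climbing.
curve = [employment(t) for t in range(0, 200, 10)]
peak = max(range(len(curve)), key=curve.__getitem__)
print(f"start={curve[0]:.0f}, peak={curve[peak]:.0f} at t={peak * 10}, end={curve[-1]:.0f}")

# An industry far from saturation (the motor-vehicle case) is simply still on
# the left side of this curve: the peak hasn't arrived yet.
```

The same equation reproduces both halves of Bessen’s story: a long rise while latent demand is unlocked, then a decline once the S-curve flattens and automation keeps compounding.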
The second debate centers on whether this AI boom is a bubble waiting to burst.
The dotcom boom of the 1990s saw a wave of companies adding “.com” to their names to ride the mania and watch their valuations soar. Infrastructure companies poured billions into fiber optics and undersea cables, expensive projects only possible because people believed the hype. All of it burst in spectacular fashion in the dotcom crash of 2000-2001. Infrastructure companies like Cisco briefly became the most valuable in the world, only to come tumbling down. Pets.com served as the poster child of the exuberance, raising $82.5 million in its IPO and spending millions on a Super Bowl ad, only to collapse nine months later.
But the dotcom bubble also got several things right. More importantly, it eventually brought us the physical infrastructure that made YouTube, Netflix, and Facebook possible. Sure, the companies making these investments, like Worldcom, NorthPoint, and Global Crossing, went bankrupt, but they also laid the foundation for the future. Although the crash proved the skeptics right in the short term, it proved the optimists directionally correct in the long term.
Today’s AI boom shows similar exuberance. Consider the AI startup founded by former OpenAI executive Mira Murati, which raised $2 billion at a $10 billion valuation, the largest seed round in history. This despite having no product and declining to reveal what it’s building or how it will generate returns. Several AI wrappers have raised millions in seed funding with little to no moat.
Yet some investments will outlast the hype and will likely help future AI companies even if this is a bubble. The annual capital expenditures of hyperscalers have more than doubled since ChatGPT’s release: Microsoft, Google, Meta, and Amazon are collectively spending almost half a trillion dollars on data centers, chips, and compute infrastructure. Regardless of which specific companies survive, the infrastructure being built now will create the foundation for our AI future, from inference capacity to the power generation needed to support it.
The infrastructure investments may have long-term value, but are we already in bubble territory? Azeem Azhar, a tech analyst and investor, provides an excellent practical framework to answer the AI bubble question. He benchmarks today’s AI boom using five gauges: economic strain (investment as a share of GDP), industry strain (capex to revenue ratios), revenue growth trajectories (doubling time), valuation heat (price-to-earnings multiples), and funding quality (the resilience of capital sources). His analysis shows that AI remains in a demand-led boom rather than a bubble, but if two of the five gauges head into red, we will be in bubble territory.
The demand is real. After all, OpenAI is one of the fastest-growing companies in history. But that alone doesn’t prevent bubbles. OpenAI will likely be fine given its product-market fit, but many other AI companies face the same unit economics questions that plagued dotcom companies in the 1990s. Pets.com had millions of users too (then a large portion of internet users), but as the tech axiom goes, you can acquire infinite customers and generate infinite revenue if you sell dollars for 85 cents. So despite the demand, the pattern may rhyme with the 1990s. Expect overbuilding. Expect some spectacular failures. But also expect the infrastructure to outlast the hype cycle and enable things we can’t yet imagine.
So where does this leave us?
We’re early in the AI revolution. We’re at that metaphorical screeching modem phase of the internet era. Just as infrastructure companies poured billions into fiber optics, hyperscalers now pour billions into compute. Startups add “.ai” to their names like companies once added “.com” as they seek higher valuations. The hype will cycle through both euphoria and despair. Some predictions will look laughably wrong. Some that seem crazy will prove conservative.
Different industries will experience different outcomes. Unlike what the Jevons optimists suggest, demand for many things plateaus once human needs are met. Employment outcomes in any industry depend on the magnitude and growth of unmet market demand and whether that demand growth outpaces productivity improvements from automation.
Cost reduction will unlock market segments. Aswath Damodaran, a finance professor, (in)famously undervalued Uber by assuming it would capture only a portion of the existing taxi market. He missed that making rides dramatically cheaper would expand the market itself, as people took Ubers to destinations they’d never have paid taxi prices to reach. AI will similarly enable products and services currently too expensive to build with human intelligence. A restaurant owner might use AI to create custom supply chain software that, at say $100,000 with human developers, would never have been built. A non-profit might deploy AI to fight a legal battle that was previously unaffordable.
We can predict change, but we can’t predict the details. No one in 1995 predicted we’d date strangers from the internet, ride in their Ubers, or sleep in their Airbnbs. Or that a job called “influencer” would become the most sought-after career among young people. Human creativity generates outcomes we can’t forecast with our current mental models. Expect new domains and industries to emerge. AI has already helped us decode more animal communication in the last five years than in the previous fifty. Can we predict what jobs a technology that lets us hold full-blown conversations with animals will unlock? A job that doesn’t exist today will likely be the most sought-after job in 2050. We can’t name it because it hasn’t been invented yet.
Job categories will transform. Even as the internet made some jobs obsolete, it also transformed them and created new categories. Expect the same with AI. Karpathy ends with a question:
“About 6 months ago, I was also asked to vote if we will have less or more software engineers in 5 years. Exercise left for the reader.”
To answer this question, go back to 1995 and ask the same question about journalists. You might have predicted more journalists, because the internet would create more demand by letting you reach the whole world. You’d have been right for a decade or so, as employment in journalism grew until the early 2000s. But 30 years later, both the number of newspapers and the number of journalists have declined, even though more “journalism” happens than ever. Just not by people we call journalists. Bloggers, influencers, YouTubers, and newsletter writers do the work that traditional journalists used to do.
The same pattern will play out with software engineers. We’ll see more people doing software engineering work and in a decade or so, what “software engineer” means will have transformed. Consider the restaurant owner from earlier who uses AI to create custom inventory software that is useful only for them. They won’t call themselves a software engineer.
So, just as in 1995, if the AI optimists today said that within 25 years we’d prefer news from AI over social media influencers, watch AI-generated characters in place of human actors, find romantic partners through AI matchmakers more than through dating apps (or perhaps have AI romantic partners ourselves), and flip “don’t trust AI” so completely that we’d rely on AI for life-or-death decisions and trust it to raise our children, most people would find that hard to believe. Even with all the intelligence, both natural and artificial, no one can predict with certainty what our AI future will look like. Not the tech CEOs, not the AI researchers, and certainly not some random guy pontificating on the internet. But whether we get the details right or not, our AI future is loading.