(comments)

Original link: https://news.ycombinator.com/item?id=43957010

A Hacker News discussion explores why Bell Labs succeeded and why it declined, focusing on the freedom and abundant resources given to researchers. Commenters discuss the need to "waste" resources in order to cultivate creativity and revolutionary ideas. Some argue that universal basic income (UBI) could unlock similar potential by letting individuals pursue their passions, while others counter with concerns about its economic viability and potential for abuse. The discussion touches on other models, such as NSF funding, Google Brain, and the potential of open-source projects. It also highlights the importance of balancing research with practical application, citing Lockheed's Skunk Works and IBM Research. Some commenters argue that smart people now prefer to work independently, while others believe in the power of shared intellectual environments. The thread also covers the historical context of wartime motivation and its effect on collaborative research. Ultimately, the discussion examines the challenge of replicating Bell Labs' conditions for innovation in today's risk-averse, metrics-driven environment.

Related articles
  • (comments) 2025-05-08
  • (comments) 2025-05-10
  • (comments) 2025-05-07
  • (comments) 2025-05-09
  • (comments) 2025-05-09

  • Original
    Why Bell Labs Worked (1517.substack.com)
    127 points by areoform 7 hours ago | 95 comments

    In a way, it's similar to the connection between "boredom" and creativity. When you don't have much to do, you can do anything, including novel and awesome things. It, of course, takes the right kind of person, or the right group of people. Give such people a way not to think about their daily bread, and allow them to build what they want to build, study what they want to study, think about what they want to think about.

    It feels anti-efficient. It looks wasteful. It requires faith in the power of reason and the creative spirit. All these things are hard to pull off in a public corporation, unless it's swimming in excess cash, as AT&T and Google were back in the day.

    Notably, a lot of European science in the 16th to 19th centuries was advanced by well-off people who did not need to earn their upkeep: the useless, idle class, as some said. Truth be told, not all of them advanced the sciences and arts, though.

    OTOH, rational, orderly living, where every minute is filled with some predefined meaning and pre-assigned task, allows very little room for creativity and gives relatively little incentive to invent new things. Some see it as a noble ideal, and, understandably, a fiscal ideal, too.

    Maybe a society needs excess sometimes, needs to burn billions on weird stuff, because it gives a chance for something genuinely new and revolutionary to be born and grow to a viable stage. In a funny way, the same monopolies that gouge prices for the common person also collect the resources necessary for such advances, which benefit that same common person (but not necessarily that same monopoly). It's an unsettling thought to have.



    To some extent, the NSF did that. My graduate education was funded by the NSF, and my research didn't have an obvious practical purpose, except to enable further research.

    Today, I'm in a corporate research role, and I'm still given a lot of freedom. I'm also genuinely interested in practical applications and I like developing things that people want to buy, but my ability to do those things owes a lot to the relatively freewheeling days of NSF funding 30+ years ago.



    This is what I think is the biggest benefit of having a significant UBI. Sure, lots of folks who are currently in “bullshit jobs” would sit around and watch one screen or another, but! A lot, probably more than we imagine, would get bored and… do something. Often that something would be amazing.

    But lizard brains gotta keep folks under their thumb and hoard resources. Alas.



      > but! A lot, probably more than we imagine, would get bored and… do something.
    
    I'm of the same belief. We're too antsy as creatures. I know on any long vacation I'll spend the first week, maybe even two (!), vegging out doing nothing. But after that I'm itching to do work. I spent 3 months unemployed before heading to college (laid off from work) and in that time taught myself programming, Linux, and other things that are critical to my career today. This seems like a fairly universal experience, too! Maybe not the exact tasks, but people need time to recover and then want to do things.

    I'm not sure why we think everyone would just veg out WALL-E style and why the idea is so pervasive. Everyone says "well I wouldn't, but /they/ would". I think there's strong evidence that people would do things too. You only have to look at people who retire or the billionaire class. If the people with the greatest ability to check out and do nothing don't, why do we think so many would? People are people after all. And if there's a secret to why some still work, maybe we should really figure that out. Especially as we're now envisioning a future where robots do all the labor.



    UBI might work in the short term, but as more and more people have kids (who learn from their parents on UBI to also live on UBI), we would run out of people actually working and paying the taxes to support it.


    This assumes that most people would be satisfied with UBI and not attempt to make more money.


    UBI isn't going to get us there. Give everyone more cash and the rent-seeking _WILL_ suck harder. Same problem as blindly raising the minimum wage instead of addressing the root issue.

    Basic econ 101: when demand is inelastic, supply can be priced at whatever the limited number who are lucky enough to get it can afford.

    Bell Labs, and think tanks generally, work by paying _enough_ to raise someone to the capitalist-society equivalent of a noble.

    Want to fix the problem for everyone in society, not just an 'intellectual elite'? Gotta regulate the market, put enough supply into it that the price is forced to drop and the average __PURCHASE POWER__ rises even without otherwise raising wages.
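    To make that concrete, here's a toy sketch (Python, invented numbers, my own illustration, not anything from the thread): with a fixed housing stock, the clearing price is set by the marginal buyer's budget, so a uniform cash transfer mostly shows up in the price rather than in extra housing.

        import random

        random.seed(0)
        # 1,000 renters with varying budgets competing for 400 apartments
        budgets = sorted(random.uniform(500, 3000) for _ in range(1000))
        supply = 400                          # fixed: supply can't respond
        clearing = budgets[-supply]           # 400th-highest budget clears the market
        # hand everyone an extra 500 (a UBI-style transfer): with supply
        # unchanged, the clearing price rises by exactly the transfer here
        boosted = sorted(b + 500 for b in budgets)
        print(round(clearing), round(boosted[-supply]))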



    This has been tried, very honestly, and it mostly sucked, then crashed. The calculation argument [1] kills it. The optimization problem which the market solves in a chaotic and decentralized way through price discovery and trading is intractable otherwise, not with all the computing power of the planet. It also requires prediction of people's needs (ignoring desires), and it's a problem more ill-posed than prediction of weather.

    The market of course needs regulation, or, rather, stewardship: from protection of property rights all the way to limiting monopolies, dumping, etc. The market must remain free and varied in order to do its economic work for the benefit of society. No better mechanism has been invented for the last few millennia.

    Redistribution to provide a safety net for those in trouble is usually a good thing to have, but it does not require dismantling the market. It mostly requires agreement within the society.

    [1]: https://en.m.wikipedia.org/wiki/Economic_calculation_problem
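    To make the calculation argument concrete, here's a minimal sketch (Python with NumPy/SciPy, invented numbers, my own illustration): even a toy central plan is a linear program whose variable count multiplies across goods and regions, and it assumes the planner already knows demand, which is exactly what price discovery provides in a market.

        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(0)
        G, R = 3, 4                          # goods x regions -> 12 plan variables
        cost = rng.uniform(1, 10, (G, R))    # cost to deliver good g to region r
        demand = rng.uniform(0, 5, (G, R))   # assumed known -- the heroic part
        supply = demand.sum(axis=1) * 1.2    # 20% slack so a feasible plan exists

        # one supply constraint per good: total deliveries across regions <= supply[g]
        A_ub = np.zeros((G, G * R))
        for g in range(G):
            A_ub[g, g * R:(g + 1) * R] = 1.0

        plan = linprog(cost.ravel(), A_ub=A_ub, b_ub=supply,
                       bounds=list(zip(demand.ravel(), [None] * (G * R))))
        print(plan.status, round(plan.fun, 2))   # status 0 == optimal plan found

    Add a time axis and a realistic catalog of goods and the program explodes combinatorially; and unlike this toy, real preferences are never handed to the solver.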



    That's the advantage of UBI.

    A revenue-neutral UBI check at some subsistence level, while killing all other government assistance including lower tax brackets, would in the short term significantly lower the standard of living for many low-income Americans and boost others. However, people would try to maximize their lifestyle, and for most that would mean working. Others would opt out and try to make being really poor work for them.

    Essentially, you remove central planning around poverty, as the government stops requiring rent-stabilized apartments, etc. In the short term that pushes a lot of poor people out of major cities, but it simultaneously puts upward pressure on wages to retain those workers and pushes down rents via those suddenly available apartments. It doesn't actually create or destroy wealth directly; you just get a more efficient allocation of resources.



    There's a catch. If enough people opt for not working, the level of UBI may go below the level of survival for some time. This will push those who can work and don't want to tolerate it to go find work. But those who cannot work much, or at all, like disabled people, would be facing hunger, and would be unable to afford the special stuff they need to survive (like medicine or home aid). They might just die from that.

    This returns us back to the problem of some guaranteed payments to those we don't want to let die, and maybe want to live not entirely miserably, and the administration thereof.

    Another danger is the contraction of the economy: businesses close, unable to find workers; the level of UBI goes down; people's income (UBI + salary) also goes down, and they can afford fewer goods; more businesses close, etc. When people try to find work because UBI is not enough, there may not be enough vacancies until the economy spins up again sufficiently. It's not unlike a business cycle, but the incentive for a contraction may be stronger.



    We should still keep the progressive income tax. UBI can even be implemented as an NIT.

    Adding a land tax too; now that would really fix some things.
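    The UBI-as-NIT equivalence is easy to check numerically; a minimal sketch, assuming a flat tax and purely illustrative parameters:

        def net_ubi(gross, grant=12_000.0, rate=0.30):
            # flat tax on all income plus an unconditional grant
            return gross * (1 - rate) + grant

        def net_nit(gross, guarantee=12_000.0, rate=0.30):
            # negative income tax: subsidy below break-even, tax above it
            break_even = guarantee / rate            # 40,000 here
            return gross + rate * (break_even - gross)

        for g in (0, 20_000, 40_000, 100_000):
            assert abs(net_ubi(g) - net_nit(g)) < 1e-9   # identical net incomes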



    > and it mostly sucked

    Citation needed. If you're referring to the USSR, please pick an economic measure that you think would have been better, and show why the calculation problem was the cause of its deficiency. The USSR was incredibly successful economically, whether measured by GDP growth, technological advancement, labor productivity, or raw output. Keep in mind all of this occurred under extremely adverse conditions of war and political strife, starting from an uneducated agrarian population with basically no capital stock or industry.

    The Austrian economist Hans-Hermann Hoppe writes of Hayek's calculation problem:

    > [T]his is surely an absurd thesis. First, if the centralized use of knowledge is the problem, then it is difficult to explain why there are families, clubs, and firms, or why they do not face the very same problems as socialism. Families and firms also involve central planning. The family head and the owner of the firm also make plans which bind the use other people can make of their private knowledge […] Every human organization, composed as it is of distinct individuals, constantly and unavoidably makes use of decentralized knowledge. In socialism, decentralized knowledge is utilized no less than in private firms or households. As in a firm, a central plan exists under socialism; and within the constraints of this plan, the socialist workers and the firm’s employees utilize their own decentralized knowledge of circumstances of time and place to implement and execute the plan […] within Hayek’s analytical framework, no difference between socialism and a private corporation exists. Hence, there can also be no more wrong with the former than with the latter.



    > Basic econ 101: inelastic demand means supply can be as expensive as the limited number who are lucky enough to get it are able to afford.

    In the same basic econ 101, you learn that real estate demand is localized. UBI allows folks to move to middle of nowhere Montana.



    It's not about excess.

    Look at some of the most famous success stories in comedy, art, music, theatre, film, etc.

    A good number of them did their best work when they were poor.

    "Community" is a great example. Best show ever made, hands down. Yet they were all relatively broke and overworked during the whole thing.

    It's because they believed in the vision.



    I know this is going to be an unpopular take, but isn't the idea of socialism that you make a unitary democratic government fill the role of Huge Monopoly Foundation so you can do stuff like fund research labs and be accountable to the public?


    It's the statist idea. Socialism in practice usually involves regulating the market heavily, or into oblivion altogether, and giving the State a huge redistribution power. See my comment nearby on why such a setup fails to work.

    A socialism where the only way to work is to own a part of an enterprise (so no "exploitation" is possible) would likely work much better, and not even require a huge state. It would be rather inflexible though, or would mutate back into capitalism as some workers accumulated larger shares of enterprises.



    Having some kind of default steward for market developments that get so competitive and fundamental that they reach full market saturation is helpful. Under a market system, at that scale, the need for growth starts to motivate companies to cut corners or squeeze their customer base to keep the numbers going up. You either end up pricing everyone out (fixed supply case) or the profit margins get so slim that only a massive conglomerate can break even (insatiable demand case). This is why making fundamental needs and infrastructure into market commodities doesn't work either.

    The problem with social democracy is that it still gives capitalists a seat at the table and doesn't address the fundamental issues of empowering market radicalism. Some balance would be nice, but I don't really see that happening.



    Across the OECD average government spending is 46% of GDP.

    https://www.oecd.org/en/topics/policy-issues/public-finance-...

    How is that 'market radicalism' ?

    How is government spending ~25 trillion USD a year somehow not considered?



    Sounds like distributism.


    Hardly. Socialism is about workers/communities owning the means of production. Research labs these days are mostly funded by the public. That's just about allocation of government resources.


    This is what I wish academia were. I'm finishing my PhD, and despite loving teaching and research (I've been told, including by students, that I'd make a good professor), I just don't see the system doing what it should. Truthfully, I'm not aware of any such environment other than maybe a handful of small groups (both in academia and industry).

    I think we've become overly metricized. In an effort to reduce waste we created more. Some things are incredibly hard to measure and I'm not sure why anyone would be surprised that one of those things is research. Especially low level research. You're pushing the bounds of human knowledge. Creating things that did not previously exist! Not only are there lots of "failures", but how do you measure something that doesn't exist?

    I write "failure" in quotes because I don't see it that way, and feel like the common framing of failure is even anti scientific. In science we don't often (or ever) directly prove some result but instead disprove other things and narrow down our options. In the same way every unsuccessful result decreases your search space for understanding where the truth is. But the problem is that the solution space is so large and in such a high dimension that you can't effectively measure this. You're exactly right, it looks like waste. But in an effort to "save money" we created a publish or perish paradigm, which has obviously led to many perverse incentives.

    I think the biggest crime is that it severely limits creativity. You can't take on risky or even unpopular ideas, because you need to publish and that means passing "peer review". This process is relatively new to science, though. It didn't exist in the days of the old scientists you reference [0]. The peer review process has always been the open conversation about publications, not the publications themselves, nor a few random people reading them who have no interest and every reason to dismiss. Those are just a means to communicate, something that is trivial with today's technologies. We should obviously reject works with plagiarism and obvious factual errors, but there's no reason not to publish the rest. There's no reason we shouldn't be more open than ever [1]. But we can't do this in a world where we're in competition with one another. It only works in a world where we're united by the shared pursuit of more knowledge. Otherwise you "lose credit" or some "edge".

    And we're really bad at figuring out what's impactful. Critically, the system makes it hard to make paradigm shifts. A paradigm shift requires a significant rethinking of the current process. It's hard to challenge what we know. It's even harder to convince others. Every major shift we've seen first receives major pushback and that makes it extremely difficult to publish in the current environment. I've heard many times "good luck publishing, even if you can prove it". I've also seen many ideas be put on the infinite back burner because despite being confident in the idea and confident in impact it's known that in the time it'd take to get the necessary results you could have several other works published, which matters far more to your career.

    Ironically, I think removing these systems would save more money and create more efficient work (you're exactly right!). We have people dedicating their lives to studying certain topics in depth. The truth is that their curiosity aligns strongly with what the critical problems are. Sometimes you just know, and can't articulate it well until you get a bit more into the problem. I'm sure this is something a lot of people here have experienced when writing programs or elsewhere. There are many things no one gets why you'd do until after they're done, and frequently many will say it's so obvious after seeing it.

    I can tell you that I (and a large number of people) would take massive pay cuts if I could just be paid to do unconditional research. I don't care about money, I care about learning more and solving these hard puzzles.

    I'd also make a large wager that this would generate a lot of wealth for a company big enough to do such a program and a lot of value to the world if academia supported this.

    (I also do not think the core ideas here are unique to academia. I think we've done similar things in industry. But given the specific topic it makes more sense to discuss the academic side)

    [0] I know someone is going to google the oldest journal and find an example. The thing is that this was not the normal procedure. Many journals, even in the 20th century, would publish anything free of obvious error.

    [1] Put it on open review. Include code, data, and anything else. Make comments public. Show revisions. Don't let those who plagiarize just silently get rejected and try their luck elsewhere (a surprisingly common problem).



    > The freedom to waste time. The freedom to waste resources. And the autonomy to decide how.

    As the article notes, several companies (Apple, Google, etc.) could (currently) afford to fund such a lab, but there is no way their management and shareholders would approve.

    There's a reason for this: research labs seem to benefit competitors as much as (or more than) the companies that fund them. This wasn't an issue for AT&T when it was a monopoly, but it is now. Personally I don't see it as a problem (since one home run innovation could pay for the entire lab) but company managers and shareholders do.

    On the other hand, Apple does seem to have a de facto AI lab with a good deal of resource waste, so maybe that's good.



    >> The freedom to waste time. The freedom to waste resources. And the autonomy to decide how.

    > As the article notes, several companies (Apple, Google, etc.) could (currently) afford to fund such a lab, but there is no way their management and shareholders would approve.

    Google did set up such a lab. The mission of Google Brain was literally to hire smart people and let them do work on whatever they want. ("Google Brain team members set their own research agenda, with the team as a whole maintaining a portfolio of projects across different time horizons and levels of risk." -- https://research.google.com/teams/brain/). Unsurprisingly, Google Brain is the place that originated the Transformer that powers the current AI craze (and many, many, many other AI innovations).



    And they shut it down. In 2023.

    The current tech giants spend a lot of money on "research," where research means optimizing parts of the product line to the 10^nth order of magnitude.

    Arguably, Google Brain was one such lab. Albeit with more freedom than normal.

    Which is fine, it's their money. But then they (and the broader public) shouldn't bemoan the lack of fundamental advances and a slowdown in the pace of discovery and change.



    "And they shut it down. In 2023"

    You mean they renamed it/merged it with another group that has similar freedom and focus on research



    What’s the name of the other group?


    Deepmind. I'd say it's actually more reputable than Google Brain.


    Deepmind


    DeepMind.


    Interestingly enough, even non-monopoly large corporations once had labs where researchers had a good deal of freedom and where the projects were not required to be directly tied to business objectives. Hewlett-Packard, Digital Equipment Corporation, Sun Microsystems, Fujitsu, Sony, NEC, Toshiba, and Hitachi, just to name a few, had labs back in the 80s, 90s, and 2000s. As late as the early 2010s, a PhD graduate in computer science had options in industry to do research that wasn’t tied to short-term business priorities.

    Unfortunately these opportunities have dried up as companies either got rid of their research labs or shifted their focus to be more tied to immediate business needs. Many of my former classmates and colleagues who were industrial researchers are now software engineers, and not because they intentionally changed careers. Academia has become the last bastion of research with fewer commercialization pressures, but it has its "publish or perish" and fundraising pressures, and it is under attack in America right now.

    I once worked as a researcher in an industrial lab, but the focus shifted toward more immediate productization rather than exploration. I ended up changing careers; I now teach freshman- and sophomore-level CS courses at a community college. It’s a lot of work during the school year, but I have roughly four months of the year when I could do whatever I want. Looking forward to starting my summer research project once the semester ends in a few weeks!



    > As the article notes, several companies (Apple, Google, etc.) could (currently) afford to fund such a lab, but there is no way their management and shareholders would approve.

    When I was at Apple for several years, there were definitely at least two such groups.



    Google has DeepMind, Microsoft has Microsoft Research, Meta has FAIR.

    It’s not trivial to foster such environments, but they do still exist in different forms.



    Facebook dumped $60B into an AI universe and HN made fun of them for it.


    This is why I think we need publicly funded open source projects with paid leads. There are so many basic things we've failed to do for ourselves and our fellow human beings.

    For example, the best non-AI TTS system is still Ivona TTS that originated at Blizzard in like 2007. The best open source solution is espeak and it's permanently stuck in 1980... Ivona was bought up by Amazon and now they don't even use the original software, but do charge money per word to use the voice via Amazon Polly. They could open source it, but they don't.

    We don't even have something as basic as text to speech freely available, whether you are disabled or not. That is a problem. You have this amazing innovation that still holds to this day, squandered away for nothing.

    Why can't we just have an institute that develops these things in the open, for all to use? We clearly all recognize the benefit as SysV tools are still used today! We could have so many amazing things but we don't. It's embarrassing



    I left a tenured position after getting fed up with several things, among which the same grant proposal getting a “it’s too visionary” from a reviewer and “it’s trivial” from another. If it’s such a coin toss, F off will ya?


    As an interesting counterpoint to the idea of "just hire smart people and give them a lab", Ralph Gomory, head of IBM Research (a peer of Bell Labs in its day) from 1970-86 said:

    > There was a mistaken view that if you just put a lab somewhere, hired a lot of good people, somehow something magical would come out of it for the company, and I didn't believe it. That didn't work. Just doing science in isolation will not in the end, work. [...] It wasn't a good idea just to work on radical things. You can't win on breakthroughs - they're too rare. It just took me years to develop this simple thought: we're always going to work on the in-place technology and make it better, and on the breakthrough technology. [0]

    [0] https://youtu.be/VQ0PBve6Alk?t=1480



    Every breakthrough needs many 'man-years' of effort to bring to market. Research is good, but for every researcher we need several thousand people doing all the hard work of getting the useful things to market in volume.


    Speaking of which https://substack.com/home/post/p-115930233 :

    > John Pierce once said in an interview, asserting the massive importance of development at Bell:

    >> You see, out of fourteen people in the Bell Laboratories…only one is in the Research Department, and that’s because pursuing an idea takes, I presume, fourteen times as much effort as having it.



    RCA tried to duplicate Bell Labs' success and it arguably bankrupted the company.


    Eric Gilliam's "How did places like Bell Labs know how to ask the right questions?" https://www.freaktakes.com/p/how-did-places-like-bell-labs-k... came to a similar conclusion. (It did well here just a couple of months ago, too https://news.ycombinator.com/item?id=43295865 , so it is a little disappointing that the discussion seems to be starting from the beginning here again.) Another point which you've both made is that other big US firms had very important industrial research labs, too. (RCA Labs is one that seems to get little love these days, at least outside the pages of We Were Burning https://www.hachettebookgroup.com/titles/bob-johnstone/we-we... Also, to be fair, "Areoform" did mention Xerox PARC once in TFA.) Indeed, overstating the uniqueness of Bell Labs helps to billow up the clouds of mystique, but it's probably harmful to a clear understanding of how it actually worked.

    But the ultimate problem with TFA is that it seems to be written to portray venture capitalists(?), or at least this group of VCs who totally get it, as on the side of real innovation along with ... Bell Labs researchers(?) and Bell Labs executives(?) ... against the Permanent Managerial Class which has ruined everything. Such ideas have apparently been popular for a while, but I think we can agree that after the past year or two the joke isn't as funny as it used to be anymore.



    If you want to know the research culture and environment of Bell Labs from the author's first-hand experience, I'd highly recommend this book by Hamming [1].

    [1] The Art of Doing Science and Engineering by Richard W. Hamming:

    https://press.stripe.com/the-art-of-doing-science-and-engine...



    My dad was at this talk in 1986 that PG shares on his blog:

    https://paulgraham.com/hamming.html

    Said it was amazing.



    It's been out of stock for nearly a year. Interesting, in a post talking about AT&T and Bell Labs, to point out that Stripe struggles to maintain an inventory of niche printed books.


    You're welcome to borrow my copy, feel free to ping me.


    ctrl+f "national lab" = 0 results

    Hello? We have 17(!) federally funded national labs, full of scientists doing the work this article waxes nostalgic about. Through the Laboratory Directed Research and Development (LDRD) program they afford employee scientists the ability to pursue breakthrough research. However, they are facing major reductions in funding now due to the recent CR and the upcoming congressional budget!



    From this blog's About page: "In 2010, our team cofounded the Thiel Fellowship with Peter Thiel..."

    Make of that what you will.



    You have to be willing to not have things guaranteed to "work." Don't just look at the best case. Investigate and discuss how many versions of Bell Labs didn't "work."

    If you just look at the success stories, you could say that today's VC model works great too - see OpenAI's work with LLMs based on tech that was comparatively stagnating inside of Google's labs. Especially if nobody remembers Theranos in 50 years. Or you could say that big government-led projects are "obviously" the way to go (moon landing, internet).

    On paper, after all, both the "labs" and the VC game are about trying to fund lots of ideas so that the hits pay for the (far greater) number of failures. But they both, after producing some hits, have run into copycat management optimization culture that brings rapid counter-productive risk-aversion. (The university has also done this with publish-or-perish.)

    Victims of their own success.

    So either: find a new frontier funding source that hasn't seen that cycle yet (it would be ironic if some crypto tycoon started funding a bunch of pure research and that whole bubble led to fundamental breakthroughs after all, hah) or figure out how to break the human desire for control and guaranteed returns.



    If you haven't already, check out the "AT&T Archives" on the AT&T Tech Channel on YouTube. It's an absolutely remarkable collection of American technology history.

    https://www.youtube.com/playlist?list=PLDB8B8220DEE96FD9



    I think it's complicated.

    A lot of large US tech corporations do have sizable research arms.

    Bell Labs is certainly celebrated as part of a telephone monopoly at the time, though AT&T actually pulled out of operating-system development related to Multics, and Unix was pretty much a semi-off-hours project by Ritchie and Thompson.

    It's true that you tend not to have such dominant firms as in the past. But companies like Microsoft still have significant research organizations. Maybe head-turning research advancements are harder than they used to be. Don't know. But some large tech firms are still putting lots of money into longer-term advances.



    > The reason why we don't have Bell Labs is because we're unwilling to do what it takes to create Bell Labs — giving smart people radical freedom and autonomy.

    My observation has been that smart people don't want this anymore, at least not within the context of an organization. If you give your employees this freedom, many will take advantage of it and do nothing.

    Those that are productive, the smartest who thrive in radical freedom and autonomy, instead choose to work independently. After all, why wouldn't they? If they're putting in the innovation the equity is worth way more than a paycheck.

    Unfortunately, that means innovation that requires a Bell Labs isn't as common. Fortunately, one person now can accomplish way more than a 1960's engineer could and the frontier of innovation is much broader than it used to be.

    I used to agree with the article's thesis but it's been nearly impossible to hire anyone who wants that freedom and autonomy (if you disagree, @gmail.com). I think it's because those people have outgrown the need for an organization.



    This has not been my experience at all. I worked on a team with substantial autonomy and agency for a few years, and most people—not everyone, sure, but almost—naturally rose to the occasion.

    People want to do good work and people want to feel like they're doing good work. If you create an environment where they feel trusted and safe, they will rise to your expectations.

    I had way more trouble with people working too hard but with misaligned ideas of what "good" meant—and stepping on each other's toes—than with anyone slacking off. It's easy to work around somebody who is merely ineffectual!

    And, sure, a bunch of stuff people tried did not work out. But the things that did more than made up for it. Programming and quantitative modeling are inherently high-leverage activities; unless leadership manages out all the leverage in the name of predictability, the hits are going to more than make up for the flubs.



    Doing work on a team isn't really what the article is discussing though. I'm referring to the very research-y skunkworks-style autonomy.

    I am well aware that people in companies can work effectively on teams and that people rise to the occasion in that context. If it didn't work, companies wouldn't hire. But that's not what the article is about.



    > If you give your employees this freedom, many will take advantage of it and do nothing

    This was addressed in the article

    > Most founders and executives I know balk at this idea. After all, "what's stopping someone from just slacking off?" Kelly would contend that's the wrong question to ask. The right question is, "Why would you expect information theory from someone who needs a babysitter?"

    also this hilarious quote from Richard Hamming:

    > "You would be surprised Hamming, how much you would know if you worked as hard as [Tukey] did that many years." I simply slunk out of the office!



    Yeah, that's the point of my next sentence. Why would someone who comes up with information theory want to give it to an employer?

    I think an answer to that was a lot clearer in the 1960's when going from idea to product was much harder.



    "The only secret worth keeping is out: the damn things work".

    What products could Shannon have made knowing only information theory? Or CSIRO, knowing only that OFDM solved multipath? Did Bob Metcalfe make more money when everyone had Ethernet, or would he have if he'd licensed it much more exclusively?

    It's very hard for a single fundamental result to be a durable competitive advantage compared to wider licensing on nicer terms. That's particularly true when much else goes into the product.



    Shannon did a lot more than just information theory. In fact, anyone who fits the autonomy persona does because that was part of the definition.

    Sure, licensing information theory is a bit of a stretch, but Shannon literally built one of the first artificial intelligence machines [1]. 2025 Shannon would've been totally fine building his own company.

    If you see these idols through their singular achievements, then yes of course it's hard to imagine them outside the context of a lab, but rarely are these innovators one trick ponies.

    By the way, Bob Metcalfe did indeed start his own company and became pretty successful in doing so.

    [1] https://en.wikipedia.org/wiki/Claude_Shannon#Artificial_Inte...



    Maybe the 2025 Bell Labs is the wider ecosystem of VCs and free floating innovators who end up starting startups instead of doing things in house.

    I do think there is a lot less low hanging fruit which makes the comparison apples and oranges. Google is like Bell Labs today, and what did they invent? LLMs? Compare that to information theory, the transistor, Unix, etc.



    > Maybe the 2025 Bell Labs is the wider ecosystem of VCs and free floating innovators who end up starting startups instead of doing things in house.

    Yep, agree with this statement. That's exactly what I think happened.



    I have no idea what Bell Labs was like on the inside, but the startups I've been involved in didn't leave a lot of room for experimentation, trying and failing.

    Quite the opposite, always a mad rush towards profit at any cost.



    > it's been nearly impossible to hire anyone who wants that freedom and autonomy

    Interesting, this is something that I'd love to do! I'm already planning on pursuing custom chip design for molecular simulation, but I don't really want to handle the business side of things. I'd much rather work in a paid lab than get rich and sell it off. Plus, you can do so much more with a team vs being independent.

    I was also homeschooled, though (unschooling and TJEd philosophy), so I've always been picking my own projects. Sometimes I wonder if the lack of generalist researchers comes down to education (another thing I'd love to pursue).



    “Smart people don’t need organizations anymore.” I get it—going solo is more appealing now than ever. But I can’t help thinking: some things really only happen in a kind of shared magnetic field. Not because you can’t do it alone, but because that moment when another smart person lights you up— that doesn’t happen in solo mode.


    Yeah I completely agree. I see it more like the benefits of going solo have eclipsed the benefits of a team in an organization.

    I don't think it's a strictly better environment but in many dimensions going solo is now better than any company. I do often long for that shared magnetic field though.



    Hypothetical slackers didn't stop great work from coming out of the lab. I'm not sure why today would be any different.


    They birthed an industry based on electrical properties that were barely understood. They also ended up needing a very dynamic metering and accounting system. Apple can get away with a more unified workforce because their needs are known and not unique.


    If your needs are known, they are also known to competitors.

    I know that Elon Musk is not a popular figure nowadays, but he very correctly stated that competition is for losers, and the real innovators build things that competitors are just unable to copy for a long time, let alone exceed. SpaceX did that. Google, arguably, did that, too, both with their search and their (piecemeal acquired) ad network. Apple did that with iTunes.

    Strive to explore the unknown when you can, it may contain yet-unknown lucrative markets.

    (There is, of course, an opposite play, the IBM PC play, when you create a market explosion by making a thing open, and enjoy a segment of it, which is larger than the whole market would be if you kept it closed.)



    > I’m so excited about programs like 1517’s Flux that invests $100k in people, no questions asked and lets them explore for a few months without demanding KPIs or instantaneous progress.

    If Bell Labs let people explore for multiple years, a few months probably isn't enough time.



    That's absolutely true! But we aren't a multi-billion dollar corporation with a war chest in the billions so sadly this is the best we can do. :(


    The focus on investing in what actually matters instead of being distracted by virtue signaling for ideological culture wars no doubt also had a huge influence.


    A related program was Lockheed's Skunkworks.

    There have been many attempts to replicate the success of the Skunkworks, but they've all failed because the organizers thought they could improve on it.



    Because it's easy to be first.


    What if it's a trick?

    You start by creating a myth: "this place breeds innovation". Then, ambitious smart people wanting to innovate are drawn to it.

    Once there, there are two ways of seeing it: "it was just a myth, I'll slack off and forget about it" or "the myth is worthwhile, I'll make it real".

    One mistake could end it all. For example, letting those who don't believe outnumber or outwit those who "believe the myth".

    So, small pieces: a good founding myth (half real, half exaggerated), people willing to make it more real than myth, and pruning off whoever drags the ship down.

    Let's take that "productivity" from this myth perspective. Some people will try to game it to slack off, some people will try to make the myth of measuring it into reality (fully knowing it's doomed from the start).

    A sustainable power of belief is quite hard to put into a formula. You don't create it; you find it, feed it, prune it, etc. I suspect many proto-Bell Labs analogues exist today. Wherever there are one or two people who believe and work hard, there is a chance of making it work. However, the starting seed is not enough on its own.

    If you ask me, the free software movement has a plentiful supply of it. Many companies have realized this already, but can't sequester the myth into something that makes money, even though free software already creates tons of (non-monetary) value.



    A close family member worked at Bell labs during the cold war era. According to them,

    The reason is very simple. There was a big-picture motivation: the war, followed by the Cold War. Once that big-picture motivation wasn't there anymore, that sort of organizational structure (or lack of it) does not work the same way. What ends up happening is what a sibling comment has noted:

    > My observation has been that smart people don't want this anymore, at least not within the context of an organization. If you give your employees this freedom, many will take advantage of it and do nothing.

    You might say, but `grep` wasn't used for war! Correct, but it came up as a side effect of working on much larger endeavours that tied into that bigger picture.

    This has been true for most of recent human history. You might know this already, but Fourier was part of most of Napoleon's expeditions, and his work on decomposing waveforms arose out of his work on the "big picture": ballistics.



    So what you’re saying is it will take a war fought by autonomous robots on behalf of two massively wealthy adversarial nations in order to finally get us a robot that can do the dishes?


    They are called 'Dishwashers'


    Did Xerox PARC have just as many hits as Bell Labs?


    >During WW2, Bell Labs reversed engineered and improved on the British Magnetron within 2 months.

    um... the UK sent the magnetron they had recently invented (1940) to the US in a spirit of wartime cooperation and because their own research and industrial base was already maxed out at the time. pretty sure they sent an owners manual and schematics too. probably even some people?

    (magnetrons, for generating microwaves, were the essential component for radar)



    In the spirit of wartime cooperation is putting it nicely.

    The magnetron was one of several technologies that the UK transferred to the USA in order to secure assistance in the war effort.

    https://en.m.wikipedia.org/wiki/Tizard_Mission



    I'm quoting their research summary. By "reverse engineering" it means that they figured out why the magnetron worked and then optimized it. They X-rayed it, found a deviation from the plans, then developed a model to understand why there was a deviation in performance.

        However examples No. 11 and 12 had the number of resonators increased to 8 in order to maximise the efficiency of the valve with the magnetic field provided by the then available permanent magnet, E1189 also incorporated cooling fins to enable the device to be air rather than water cooled. 
        
        Sample No.12 was taken to the USA by E. Bowen with the Tizard mission and upon testing at Bell Labs produced 10 times the power at 5 times the frequency of the best performing American triodes. A certain amount of confusion arose as the drawings taken by Bowen still showed the 6 resonator anode but an X-Ray picture taken at Bell Labs revealed the presence of 8 resonators.
        
        The E1189 or its Navy equivalent NT98 was used in the Naval radar type 271 which was the Allies first operational centimetric radar. The early RCM’s like the E1189 were prone to mode jumping (frequency instability) under pulse conditions and the problem was solved in by means of strapping together alternate segments a process invented by Sayers in 1942. Strapping also considerably increased the magnetron’s efficiency. 
    
    
    via https://www.armms.org/media/uploads/06_armms_nov12_rburman.p...

    and another account, https://westviewnews.org/2013/08/01/bell-labs-the-war-years/...



    >the problem was solved in by means of strapping together alternate segments a process invented by Sayers in 1942

    UK physicist James Sayers was part of the original team that developed the magnetron in the UK. He did join the Manhattan Project in 1943, so perhaps before that he came over to the US (to Bell Labs) as part of the radar effort: in that case strengthening Bell Labs contributions, weakening any claim to reverse engineering :) When Lee de Forest "invented" the triode tube amplifier, he had no idea how it worked. When Shockley "invented" the transistor, his team grumbled that he had stolen their work (similar to Steve Jobs, the boss, taking over the Macintosh project when his own Lisa project failed) but in any case, it was not actually understood yet how transistors worked. "How the First Transistor Worked: Even its inventors didn’t fully understand the point-contact transistor" https://spectrum.ieee.org/transistor-history

    In these cases, the bleeding edge of R and the bleeding edge of D were the same thing. A certain amount of "reverse engineering" would have been mandatory, but it's really "reverse sciencing", "why did my experiment turn out so well", rather than "reverse engineering a competitor's product to understand how did they make it work so well."

    https://en.wikipedia.org/wiki/MIT_Radiation_Laboratory

    In early 1940, Winston Churchill organized what became the Tizard Mission to introduce U.S. researchers to several new technologies the UK had been developing. Among these was the cavity magnetron, a leap forward in the creation of microwaves that made them practical for use in aircraft for the first time. GEC made 12 prototype cavity magnetrons at Wembley in August 1940, and No 12 was sent to America with Bowen via the Tizard Mission, where it was shown on 19 September 1940 in Alfred Loomis’ apartment. The American NDRC Microwave Committee was stunned at the power level produced. However Bell Labs director Mervin Kelly was upset when it was X-rayed and had eight holes rather than the six holes shown on the GEC plans. After contacting (via the transatlantic cable) Dr Eric Megaw, GEC’s vacuum tube expert, Megaw recalled that when he had asked for 12 prototypes he said make 10 with 6 holes, one with 7 and one with 8; and there was no time to amend the drawings. No 12 with 8 holes was chosen for the Tizard Mission. So Bell Labs chose to copy the sample; and while early British magnetrons had six cavities American ones had eight cavities... By 1943 the [Rad Lab] began to deliver a stream of ever-improved devices, which could be produced in huge numbers by the U.S.'s industrial base. At its peak, the Rad Lab employed 4,000 at MIT and several other labs around the world, and designed half of all the radar systems used during the war.

    that seems to be the source of the reverse engineering idea, and I think Bell Labs' role (which is quite important) was more toward perfecting the devices for manufacture at scale, as it was an arm of a giant leading edge industrial company.

    I'm not diminishing Bell Labs nor anybody there, it was a lot of smart people.



    > as part of the radar effort

    Something I've been curious about and thought I'd ask the room here since it was mentioned.

    It seems to me that "the radar effort" was very significant, almost Manhattan Project levels itself. In every book about scientists in WW2 or the atomic bomb that I've read, it seemed everyone had a friend "working on radar", or various scientists weren't available to work on the bomb because they were, again, "working on radar."

    Was this true or just something I'm overanalyzing?



    It's very true.

    Guess who pioneered the venerable Silicon Valley: it was HP (then Agilent, now Keysight). Their first killer product was the function (signal/waveform) generator. HP was basically the Levi's of the radar era, making tools for the radar/transistor/circuit technology gold rush.

    One of the best academic engineering research labs in the world for many decades now is MIT Lincoln Laboratory, and guess what: it's a radar research lab [1].

    I can go on but you probably get the idea now.

    [1] MIT Lincoln Laboratory:

    https://www.ll.mit.edu/



    If I was a billionaire, that would be a fun thing to do:

    1. find 100 highly motivated scientists and engineers

    2. pay them each $1m/year

    3. put them in a building

    4. see what happens!



    To be fair, you only need to pay sustenance or opportunity cost, so closer to $100k per person should be fine, especially outside of the USA.


    I figured the $1m would also fund the equipment and supplies they'd need.


    The hard part is picking the right people.


    The way to do it is the way top engineers and scientists were recruited for the Manhattan Project. You go around to universities and talk to the professors, who will know who the highly motivated people are.


    What I look for in engineers would be what I’d look for here: “I have no idea how to do that, let me get started.”


    My guess is not much unless you give them a specific problem, and create a hierarchy.

    Otherwise things will just fragment into cliques and fights, like any university department.



    What would there be to fight over? They each have a $1m budget.


    In the Manhattan Project the scientists were unified by a noble goal. Money doesn't buy that. Without such a goal, you'll get a hundred scientists each pulling in his own direction in order to get rich.


    Have you ever seen an academic department?

    Surely the lab scientists and engineers would assert that they need a bigger budget than the mathematicians, and so on.



    really interested in 1517 now. if i could build a new bell labs i would use a hedge fund structure instead of vc. the lab would attract talent and some of it might get involved in the fund, but the lab would stay afloat on fund profits. the idea is to just let smart people cook for a tryout year, with great incentives. (not too unlike 1517's program as it turns out)

    the difference with this lab idea and a vc like YC is that vc portfolio companies need products and roadmaps to raise investment and for driving revenue. whereas an asset manager is just investing the money and using the profits to fund engineering research and spinoff product development.

    firms like this must already exist, maybe i just never hear about their spinoffs or inventions? if not, maybe a small fund could be acquired to build a research division onto it



    Bell Labs wasn't the only loss. HP Labs was another victim of LBO cannibalism.


    And Xerox; and IBM went through fits and starts: IBM Boca Raton, IBM Tully Road. Fairchild. Columbia Physics. The Manhattan Project. Princeton. Cornell. Bletchley Park.


    MIT Media Lab.










