(comments)

Original link: https://news.ycombinator.com/item?id=37985176

Overall, the discussion highlights several challenges in designing modern software applications, particularly around the use of web technologies. Traditional approaches such as static compilation and linking did allow simpler architectures and fewer performance compromises, but the transition to modular, dynamic-language-centric ecosystems has proved difficult: static tree shaking matters, and the loss of modularity that web technologies can bring creates its own problems. The trend of bundling large parts of a general-purpose runtime (such as Microsoft's .NET Framework or Adobe AIR) into large applications complicates things further. And while Electron offers advantages such as access to native libraries and simplified deployment to multiple platforms, it adds unnecessary complexity and bloats applications. Ultimately, there is a trade-off between developer productivity (the convenience of coding in dynamic languages) and performance.

Related articles

Original text
Software disenchantment (tonsky.me)
540 points by InsiderTesla 1 day ago | 400 comments


Leaner, cleaner, less buggy, more secure, more performant, longer-lived code is obviously entirely possible. If people managed to do it at the dawn of the information age surely they can do it today, with multiple decades of massive experience, not to mention the incredibly powerful tools developed in the meantime.

If it's not done, it's because there is no money in it. In fact, the opposite.

The counter-incentives to wasting time on high quality software are numerous and affect all sorts of teams. VC funded startups must get to market first or die. Fake-it-till-you-make-it is their religion. For more mature organizations too, cost and bloat are not an issue; they are a feature. The bigger the team, the more prestige for the managers, etc. The costs are passed on to clients anyway.

How come "ruthless market forces" don't rectify this wastefulness? You'd think that codebases of superior quality would earn the keys to the kingdom. They might, eventually, in a competitive environment that is less prone to pathologies, hype, etc.



Kagi is not VC funded, yet we have code that is sub-optimal, a ton of bugs, and odd performance issues here and there. Knowing what I know about software development, I do not think this is avoidable, regardless of the source of funding or the size of the company. It is a function of the complexity of the software, the resources available, and the incentives in place.

What we can do, though, that perhaps VC funded companies can not as easily, is allocate time to refactor code and deal with tech debt. In fact, that is what we are currently doing: we basically pulled the handbrake on all new feature development for 45 days to deal with our main tech issues. The ability to do this requires a long-term thinking horizon. It is very difficult to make that kind of investment if you expect to get acquired next year and tech debt becomes somebody else's problem.

Also worth noting: as long as the product is being actively developed, it will always have new bugs and issues. 'Perfect code' is achievable only in a closed-context scenario, where new features are not added any more. (Which randomly brings this weird thought to my head: the only human who no longer makes any mistakes is a dead one; perfection in human actions is only achieved in the absence of life... ok, need to stop there.)



> which randomly brings this weird thought to my head: the only human who no longer makes any mistakes is a dead one; perfection in human actions is only achieved in the absence of life... ok, need to stop there

Love a good philosophical tangent! Wish you'd expanded on it :)



Let’s hope this doesn’t get taken up by a sentient AI in the future :-)


> If it's not done, it's because there is no money in it. In fact, the opposite

This bears repeating. It's the disease that has consumed software and is making all modern software the worst possible version of itself.

I think it's the VC funding model which has driven the industry this way. Startups get millions in funding, then it's a race to make enough money to pay back those investors which leads to this. The companies have to squeeze dollars from their app as fast as possible which means anything that doesn't have a ROI metric attached to it will not get a second of anyone's attention.



Honestly, there are many reasons for our current situation. One is that companies aren't (usually) run by engineers; they're run by product or business people. Those types don't care about performance, website footprints, smooth scrolling, etc. They care about adding new features, getting users, and doing so as fast as possible. Another reason is that many web developers were taught that software engineering is mashing together a mixture of Node, Ionic, Bootstrap, Vue.js, Angular, jQuery, etc. to quickly make a website. No one was taught how to do things on their own so they just bundle framework after framework into their projects just to do simple things. Finally, it's not like people built highly performant software in the 90s because they genuinely embodied this article's spirit; they did so out of necessity. As soon as computers got fast enough, we stopped having to focus on micro-optimizations just to get our products to run.


> Those types don't care about performance, website footprints, smooth scrolling, etc

they don't care because their _users_ don't care.

I find these discussions are always led by engineers, shaking their fists at clouds. nobody cares! it doesn't make any money so you're just whining into the void.



That's like saying only engineers care about appliances that work long past the warranty expiration; users don't, so business people value-engineer it and everyone except the engineers is happy.

Except that's not true. Users do care; they just don't have a choice in the matter. I have listened to many, many laypeople who have expressed frustration with software. They may not be able to articulate it to the extent that the quote does, but when they're stuck on 3G and they need to load a webpage that keeps timing out, they get frustrated even if they don't know it's because of the footprint, a poorly made SPA, or whatever.



> they don't care because their _users_ don't care.

I respectfully disagree. I think the users care, but they don't make their own choices - their choices are made for them by people who don't care!

Was MS Teams chosen by their end-users? Nope.

Come to think of it - was Slack chosen by their end-users? No, again.

End-users aren't usually given an option:

1. For B2B the choice rests with one (or a few) people.

2. For B2C the choice is made purely because some product got some traction for reasons unrelated to its quality, and that was enough to force the rest of the users to follow or be left out of the network (Slack, Facebook, major shopping sites, etc).

The majority of end-users did not exercise any choice.



Pretty sure we had people spin up a Slack instance at our company while we were still on Hipchat officially. Slack was a big improvement over what we had before that IMO.


I think often users do care, but they have no meaningful way to tell anyone this or otherwise cause change. I hate the banking web app (from a major US bank) that I have to use. It is incredibly slow, buggy, poorly laid out, and occasionally just decides to put up a spinner forever. Who do I complain to about this? If you call and tell a customer service rep, they'll just tell you to refresh and try again, often sympathetically. They know it's terrible and hear many complaints themselves, but they also have no power to do anything about it.


This gives engineers a bit too much credit. We have a tendency to heavily over index on last year's problems. That or exploring edge cases that also make no bloody sense.

Indeed, the very existence of so many frameworks is also very easy to blame on errant engineering.



> This gives engineers a bit too much credit. We have a tendency to heavily over index on last year's problems. That or exploring edge cases that also make no bloody sense.

When I was in university (a long time ago, shortly after the big bang :-)) the informal motto of the computer science faculty and students was "Computer Science: Solving Yesterday's Problems, Tomorrow!"

Now that I think about it, I don't find it that funny anymore.



>> It's the disease that has consumed software

This is another example of Scott Alexander's Moloch: https://www.slatestarcodexabridged.com/Meditations-On-Moloch

"companies in an economic environment of sufficiently intense competition are forced to abandon all values except optimizing- for-profit or else be outcompeted by companies that optimized for profit better and so can sell the same service at a lower price."



I don't think software used to be more secure. As computer users we were more trusting, if not to say naive. As more machines connected to networks, we learned a thing or two.

Open access by default. No passwords [1] or short passwords. Then insecurely stored passwords. Everything in plaintext. Input sanitization? Why bother, only I can input data, I trust myself. Don't get me started on telnet.

I suspect that's another reason why software is more bloated. We started noticing things, how they interact with each other. And once you see something, you can't unsee it. The edge cases we have to account for are growing, not decreasing. There's more hardware to support too.

I'm sure the whole process of writing performant code can be improved. At the same time the bar is being raised faster than we can (or want to) reach it.

1 - And now we're inching towards no passwords again.



Security is indeed an important dimension because it holds the key (pun) for ever more important applications. I agree that a more detailed, like-for-like comparison of software qualities across decades needs to be very careful. Applications have exploded in all domains. But then the ranks of software engineers, their tools, and their ability to exchange best practices have also exploded.

The trouble with exponential curves is that a small difference in rates can create dramatic discrepancies.



> If it's not done, it's because there is no money in it. In fact, the opposite.

Exactly my thought. Incentives are not aligned. There are industry sectors where performance and correctness have value. If you care about the software craft in the same way as the author (as I do!), then the best way to enjoy work is to move to such industry sectors.



Care to provide examples of such sectors? Thanks.


Not GP, but IME researchers often run big, complex simulations, and often have to care deeply about performance. Correctness, less so, unfortunately. A lot of research software is written by researchers themselves with very little understanding of good software development principles and practices, and it ends up messy, overly complex, and buggy. That said, being a software developer within research can be the best of both worlds. You should be able to demonstrate (using perf tools, and comparing with existing comparable software) that you're squeezing a lot out of the hardware, while being able to construct maintainable, tested software.


The only one I have hands-on experience with is financial trading. My workplace is expanding into a new asset class and we're using Haskell for this. The company next door from us is a crypto-trading company using Rust.

I can only speculate about other sectors where it might make sense:

- Hardware synthesis (i.e. the software used to design silicon chips)

- Aerospace

- Defense

- Smart city devices



>> There are industry sectors where performance and correctness have value.

> Care to provide examples of such sectors? Thanks.

I worked for years developing munitions control software.

Never once killed anyone by accident.



Anything involving hardware.

Except car manufacturers.



What makes car manufacturers different?


Not PP, but I used to work for an automotive subcontractor company, and I've heard a few stories about fatal car accidents that led to lawsuits, which proved that the car design was the reason for the accident, yet the car manufacturer just paid "damages" to the relatives (or settled out of court, maybe) and never bothered to change the design. Apparently, it was more expensive to reconfigure their production pipelines than to pay for an occasional death.

That said, this is probably what any big enough company would do. So your point still stands, maybe car manufacturers are no different.



The infotainment system, it’s basically as safe as a web stack (and on the CAN bus).


i think ordinary people frequently feel frustrated with low-quality software, but it seems to me not necessarily in a way they can consciously articulate. that is, they probably can tell when software is frustrating to use, but they don't notice a clear difference between:

- slow, high-latency software

- poor resource usage slowing down the whole machine, including from unrelated software

- bad user interface design

- bugs

- intentionally bad/manipulative patterns

if the customers can't even really perceive what it is they are buying, it's not surprising that market forces aren't solving the problem.

i'm not a user interface designer or researcher so of course this could be totally wrong -- just an impression from informal observation



your comment made me think that despite the main thrust of my comment, this trajectory is not completely money driven [1]. the incredible journey of the hardware side of technology (cpu speed, memory size, network bandwidth etc) has also played a role in this profligacy because it can cover for a lot of less optimal design

this is something that happens more widely in the use of resources: you build more highways and instead of relieving congestion you get more people commuting

[1] my open source and volunteer-built linux distribution (will not name names) routinely (like almost daily) prompts for GB sized system updates.



I'm not sure how well "ruthless market forces" are allowed to operate in software. You definitely get some disrupters but for the most part I think a lack of competition is part of the problem.

Microsoft Teams is one of the most buggy, unreliable pieces of software I know, but it's still the market leader in terms of share because of Microsoft's near monopoly of the office suite software world.



It may be worth considering that the market is not for messaging and workspaces, but mainly for integrated business communications systems. If the integration is more valuable to customers than Slack or Zoom's smooth operation, Teams is going to win out.


This is why we need stronger anti-monopoly enforcement.


At a data-centre scale they might still. Even marginal gains in efficiency translate to large savings in power consumption, heat management, and waste. If tech companies are properly taxed and regulated for these externalities, like say consuming billions of litres of fresh water during a drought in order to train a ML model, there would be a lot more pressure for this sort of efficiency.


This is where I normally end up in the software is way too slow today discussion.

Everyone claims to care about the environment until it impacts them even slightly. It's not even hard/time consuming to write good code. People are just overwhelmingly selfish and lazy.

Software devs consistently contribute to the environmental problems that many software devs then complain about and pretend is only a redneck issue. The reality is that we have an incredibly serious idiot issue and there isn't a solution due to the scale of the problem and the corruption preventing meaningful change.

An article the other day also pointed out the problem well, though that was related to the increasing lack of people who understand the code underneath the current flavour-of-the-month bullshit framework.



> Software devs consistently contribute to the environmental problems

It doesn’t help that schwabists try to provide indexes such as “Java consumes more than JS”.

This kind of sentence, measuring the immeasurable (as if a square meter takes more energy than a liter), communicates "We'll dominate you with power while understanding nothing of what you do, and tax you for using Java". I bet it's Facebook's PHP lobbyists who came up with that.

If heating is a problem, then by all means, tax CPU time. But I already know the tax won’t matter because software provides such an immense value to society. We already pay average engineers $800 a day! But we shouldn’t try to get rid of a language.

If they knew what they were doing, they’d certainly ban NPM. But of course there are already GitHub actions and Sonar extensions publishing the CO2 consumption index of Java programs...



I wouldn't take the approach of banning languages.

But certainly Microsoft shouldn't be allowed to carelessly use a public, scarce resource like freshwater on such a scale as they did during a drought in order to train a model[0] they didn't have a use-case for yet? They should at least pay for the use of that resource to help the local community that now has to deal with the consequences.

The company claims it will "replenish more fresh water than it consumes by 2030," somehow... but that's too little, too late. And who's to say they will keep that promise? There's no consequence for them if everyone forgets about it and they never follow through. No better time than the present to course-correct.

Resource-extraction companies are like this too. They'll exploit the environment and pollute like crazy unless they are forced not to. They will fight tooth and nail to avoid regulation since that slows or limits profits. But they don't care about people or the environment. Neither do technology companies.

Small gains in efficiency at the level of user-space code have been shown to translate into marginal gains in profits in domains such as ads and securities trading. Data centres also look for efficiency gains here to reduce their operating costs where possible. However, I suspect we're getting close to the limit here in terms of gains and motivation. There is still a significant amount of "throw more machines, air conditioning, and fresh water at the problem until it goes away" thinking, since the environmental impacts of those decisions have no consequence for the ones making them.

There's no need to measure the energy usage of the JVM if it's doing useful work. However there probably should be more guidelines for where it's appropriate to set up data-centres and taxing them for the resources they use and pollution they create.

[0] https://futurism.com/critics-microsoft-water-train-ai-drough...



yep. but also mundane and cyclical factors like the much higher cost of money after decades of ZIRP are likely to "encourage" people to do more with less.


> ...with multiple decades of massive experience...

You're assuming that the people with this experience have a) successfully passed it on, and / or that, b) they're still coding.

We don't have an apprenticeship model in software development that would lend itself to such a thing. Each new crop of developers has to relearn the lessons in whatever new stack they happen to be using when they encounter the issues. It would be like each new generation of carpenters having to relearn their trade because the wood had entirely different characteristics every time someone went to use it.

That experience you mention applies to the general cases, but again, the people that have it may not be the ones doing the work hands-on any more.



There's hardware bloat too. Old fart example: In the days of analog SD TV, the "foolproof" way to feed video into your TV was an RF modulator. The "proper" way was via direct video input of some sort. The "even more proper" way was an RGB type interface.

Except by the end it hardly mattered any more. RF modulators and tuners had gotten so good, that perfectly adequate video resulted from the RGB -> composite -> RF -> composite -> RGB chain. Bloat, but who cares?

In an automobile, the "proper" way to charge a phone is to have a 12VDC->USB type of adapter plug. The "bloat" way is to have a 12VDC->120VAC inverter and then plug the phone's existing wall charger into that. More circuitry, but it gets the job done and it's cheap.

If you like designing electronic gadgets, the "proper" way to flash an LED is two transistors and a couple of resistors and capacitors to build an astable multivibrator. The modern way is to program up a small 8-pin microcontroller. A CPU running thousands of lines of code just to blink an LED? Who cares.
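To make the blinking-LED half of that comparison concrete, here is a minimal sketch of the firmware-style loop being described. It is deliberately not tied to any real 8-pin part: the GPIO write and the hardware timer are stood in for by printf and nanosleep so the sketch compiles and runs on an ordinary POSIX machine, but the shape of the logic is the same.

    /* Sketch of an LED-blink loop. On real hardware the printf would be a
     * single write to a GPIO register and the nanosleep a timer delay;
     * everything else (startup code, runtime, libc) rides along unseen,
     * which is the "thousands of lines to blink an LED" point. */
    #define _POSIX_C_SOURCE 199309L
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        int led_on = 0;
        struct timespec half_second = { .tv_sec = 0, .tv_nsec = 500000000L };

        for (int i = 0; i < 10; ++i) {
            led_on = !led_on;                      /* toggle the (virtual) pin */
            printf("LED %s\n", led_on ? "ON" : "OFF");
            fflush(stdout);
            nanosleep(&half_second, NULL);         /* ~1 Hz blink, 50% duty   */
        }
        return 0;
    }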

If your computer/tablet/phone are reasonably recent it's the same for software. It's only when your gadget is a few years old that you really see the bloat as formerly performant software "degrades" in later versions.



"Except by the end it hardly mattered any more. RF modulators and tuners had gotten so good, that perfectly adequate video resulted from the RGB -> composite -> RF -> composite -> RGB chain. Bloat, but who cares?"

Eh, no it didn't. That would just look horrible.



This was with a DVD recorder unit. Composite or RF? It really didn't look that different (and both looked good, by SD video standards).


> VC funded startups must get to market first or die. Fake-it-till-you-make-it is their religion.

Sure, that's valid. Though most of the world is not comprised of startups. That's mostly a USA thing with a few small exceptions out there as well.

> How come "ruthless market forces" don't rectify this wastefulness?

They do, though people mistake those forces for "incumbents with enough power to influence the market forces". So we're witnessing the reality in which they dominate.

I had plenty of examples in my career where better code helped the company long-term, but as we all know, there must be a leader somewhere who understands the tradeoffs and gives a green light every now and then.



I think software is so complex that the house of cards eventually falls and ruins the company. Then everyone is mad and the workplace becomes toxic. Most companies fail so eventually the market does rectify it but it can be a long slog until that happens, at least when there was so much money being thrown around.

But the quality of the codebase definitely had less effect on the company's success than you'd expect, like you said. The costs are just shuffled down to the customers and developers while everyone else gets rich.



Ask yourself whether you want to be remembered for selling millions of copies next year, or leaving behind something that in time might be cherished by generations.

Look outside software and see what things have been deemed quality and why.

Usually the people doing quality make very few compromises, and often they don't do it on purpose.

Quality solely starts with yourself. Only you can guarantee within your own merits and experience what quality is.

Explaining quality is thereby difficult because it is so determined by personal traits and experience.



My personality trait: I need to feed my family.


Quite simply: you don’t ship code, you ship features. You don’t ship automated dependency injection. You don’t ship elegant abstractions. You don’t ship cool compiler tricks. You ship new stuff for customers that they’ll pay for. Your high unit test coverage that makes any refactor a painful slog gets in the way of shipping features far more often than the TDD zealots are willing to admit. Most of the “best practices” in the industry are unrealistic dogma created by people in post-PMF, entrenched companies that no longer need to do much to print money.


> Your high unit test coverage that makes any refactor a painful slog

Huh? Unit tests are critical to be able to perform a major refactor without breaking everything.



Only when your refactor only touches business logic underneath the public API. If you do a true rewrite that breaks tests, getting your coverage back up to what it was before the refactor just delays your ship date. And many engineers WILL get green bar syndrome once that coverage % goes down, losing sight of real business goals (staying alive and making payroll). I’ve seen lots of codebases where obsession over unit tests led to tests that were orders of magnitude more complex than the system under test. This is incredibly common in multi-process or multi-device software, where there’s some sort of custom protocol over an IPC or network or RF layer. You’ll invariably see massive amounts of effort put into mocking complementary parties inside a unit test framework, when the sane thing to do is to just write a 5-line bash script for an integration-level test. Or — brace yourself — just do a manual test and ship it today.


That's how MBAs talk, not how engineers talk.


Engineers understand that the entire discipline is about tradeoffs. It would be extremely easy to build reliable, secure, performant, robust, etc software given infinite time and infinite budget. Engineering is about working in the real world, where you don't have infinite time or budget.

Is it correct to build a piece of software that runs 50% slower, but can be built in 6 months instead of 12? The answer is "it depends" and good engineers will seek to understand the broader context of the business, users, and customers to make that decision.



A conflict as old as time. Unfortunately, it's the MBA thinking that pays the company's bills.

An ideal world isn't one in which either the MBAs or the engineers win. It's one in which they coexist and find a reasonable balance between having more useful features than the competition and not expending too much effort to build and maintain those features.



No, it’s how anyone that’s actually worked in an SMB / non F-1000 / non household name company talks. Most “regular” companies need to focus on getting features out the door.

I’m a software engineer, not an MBA.



You are not anything until you act as something. Acting like an MBA makes you virtually indistinguishable from one. I don't care what you studied, it's what you're doing (or not) with it.


Your entire stance is no true Scotsman with ad hominem. You realize almost every YC company operates in the way I described until they become entrenched in their domain, right? Do you think pre PMF companies are bickering about unit test coverage? If they are, they’ll fail


It's really a basic point that I can call myself whatever I want, but if I walk and quack like a duck, I'm a duck.

YC is a VC firm which is a very specific context that demands things like revenue and growth to be so prioritized as to be implicit and core features of the working ideology. That's not really a good model for software engineering when it comes to social benefit -- it prioritizes something else. It can demonstrate incidental social benefit but that's not actually an incentive that's built into the system that YC operates in and reflects internally.

There are Scotsmen out there, they're just not part of this discussion. That doesn't make anything I said a "no true Scotsman" argument.



Here's a repo for you with no test coverage and no auto-generated DI. They're using unsafe pointers all over the place, too!

https://github.com/id-Software/DOOM

Shall I prepare the postage for the letter in which you'll call John Carmack an MBA? Should we send another to Chris Sawyer? I heard he didn't even write a formal design doc for Roller Coaster Tycoon!



You've either intentionally cast my argument as something it's not or you need to read a little deeper before replying.

Nowhere did I say sloppy code is the problem; what I said was that justifying it in terms of profitability is the problem. There's a difference between cranking something out because you're excited to show it and rushing through an implementation because of this or that externally defined money-based KPI.



And the answer is the same as the answer has always been. When the market fails, you need to look to law and policy. It's really that simple.

We've done it a bit, Apple's little wrist-slap for slowing down the iPhones, for example.

We just need more of it. I know it's all anti-Libertarian or whatnot, but "more regulation" has worked quite a bit in the past and present. Just do that.



I agree. And this is why I believe most of the points made by Jonathan Blow don’t matter.


If the goal is to make money, you will make money. If the goal is to make quality software, you will make quality software. Sometimes those two goals are in alignment, most of the time they are at odds.


in my experience, when it comes to picking two out of quality, cost, or speed to delivery, businesses always choose speed and cost. I don't like it, but I just grew to accept that. The reason why physical engineering seems focused on delivering quality is because of strict regulatory oversight, which, for better or worse, we lack in software.


good quality software (say, based on well designed, documented, tested etc. building blocks) can lower costs and improve speed to delivery in the longer run (less need to refactor, fewer bugs, easy to reuse, extend etc. etc.)

the trouble, empirically speaking, is that this "longer run" is not close enough to weigh on decisions :-)



Honestly, with some adjustments, the same incentives also hold true in every industry.

I would personally put the low quality of our code more down to immaturity and lack of tooling.



ZIRP is how come the market forces don’t rectify it. Market is mis-allocating capital on purpose, result is “Zombie firms”


I have a still barely usable HP MS200 all-in-one machine. I got it cheap at a garage sale in 2017. It wasn't fast, but with Linux on it, once Google Chrome was finally loaded, it was OK, even to the point of running the web version of Skype for fullscreen video chats, certainly for watching fullscreen Youtube. It went off to the in-laws as a Youtube watching and Gmail station.

Recently it came back to me. And with the old, 2017 vintage software on it, still worked as it did then. But before making it a kiddie computer, I installed FC38 and the current version of Chrome.

But Youtube videos were now "slide shows". No amount of fiddling with the settings made them play right. Finally gave up and changed the RAM from 2GB to 3GB - I just happened to have the right (laptop) memory card to do that.

And that brought it back to the old, barely adequate (720p fullscreen without noticeable skips) performance. 1.5x as much memory, an extra gigabyte, to do the same thing as six years ago.

"So get with the program and buy an adequate machine! Don't you know that 16GB is the absolute minimum to get anything done these days?" Sure - I have machines with 16GB+ in them. Even on the crappy machine though, Google Chrome is showing a memory footprint on the order of 30GB. I'm sure most of that is mmap'ed files; it sure isn't RAM. But 30GB. For a mostly idle web browser.



Youtube is a great example of the performance difference. Videos have risen in quality because of modern codecs, but old or cheap machines need to decode them in software (sometimes in Javascript, which is terribly wasteful but works on every machine) because they lack a modern GPU. On the other hand, for most devices, the increased battery life, lower temperatures, and smaller RAM/disk space requirements are obvious improvements. Youtube could leave duplicate files with the old encoding on their servers (if you use an alternative frontend for Youtube, you can often see the old file formats still being available for old content!) but that's just wasted space with most of the world visiting from more recent devices.

Most RAM usage increases in Chrome have come from advancements in the sandboxing architecture. Shared memory and process space could be used to attack and bust out of sandboxes, so more isolation was added. I'm sure there are some more useless features leading to a jump in RAM usage as well, but the really big ones always seem to be the fact that Chrome spawns a new, independent process for every tab/extension.

I've also noticed how much impact ad blockers have these days. With the web becoming ever shittier, effective adblockers have become harder to make and need more resources to do their job.

Software doesn't get slower just to make your day harder. In many cases, slow software is a result of replacing dangerous hacks with good implementations, and of changing requirements. FLV videos just don't cut it anymore, and I doubt h264 will be around in five years with h265 and AV1 making their way to more and more devices.



Excellent remark. For each "I needed an extra gigabyte of RAM in my computer to do the same thing as six years ago" complaint there is normally a valid reason like this. It's never exactly the same thing as six years ago, really.

Oh, and the same applies to the original blog post, too.



AV1 is a massive flop in my opinion. What was supposed to be the be-all-end-all open source video format of the future has only recently, barely, started to pop up here and there. There are tech demos and talk of Youtube etc. using AV1, but in the same breath they say it's mostly for really low bitrates, where AV1 gains the most. Also, I'm sure no one is looking forward to burning CPU time (or, very lately, GPU time) on encoding 4k+ streams. Is that even financially viable? You save 30% on bandwidth and spend 10000% more on server HW time. (I'll admit that these numbers are pulled from my ass, but they're based on a few-years-old knowledge of AV1 encoders being like 350-3500 times slower than h264 encoders, depending on optimization.)

The entire "there was no usable encoder and no hw decoder for years" thing didn't help and even now it's still all slow and complex, who has time and money for that?

AV1 is just too complex for what it produces in comparison with the good old h264 or vp9.



AV1 performs just fine; the problem is that it started gaining popularity at a time when new hardware has exploded in cost, especially for computers. h265 has been in computer hardware for a while, but as I found out trying to play some special h265 stream on my laptop in the train, software decoding will easily peg the CPU as much as AV1 does.

h265 hardware decoding came out in 2015 with Nvidia's 9xx series, while AV1 took until the unaffordable Nvidia 30xx/Intel Xe/RDNA 2 generation to become available. Encoding support took even longer (RDNA3/Xe 2/Arc/40xx). In a few years, I'm sure it'll be as popular a format as h265 is today.



Web browsers in particular have become operating systems unto themselves. The "modern web" has so many features, often implemented in half a dozen competing ways... Just text rendering on modern hardware can be a HUGE lift.

Modern toolchains are all about "not reinventing the wheel" -- but when each dependency has picked a different version of said wheel, just pulling in a few dependencies leads to 6 different implementations of the same low-level features.



This is one of the reasons the announcements about WebGPU made me cringe. Yeah, just what I need, more unnecessary complexity added to webpages. Especially when there's seemingly a chance people will use them to mine crypto on my GPU while I'm on their page.


I forgot to mention this: Every time I install a new Linux system, I give the "factory" UI a chance before, inevitably, giving up and switching to MATE. Well, Gnome Shell was actually pretty crisply responsive on this clunker. And it has an "app store". And that has Chrome in it. Well well! But that installed a FlatPak. Bloat, bloat, bloat. Luckily Chrome is still directly "natively" installable as an RPM that actually uses the OS's shared libraries.


Sure it’s using system ones and not the vendored ones? I mean chrome as packaged by Google, not, say, Fedora’s Chromium (which is bent and coerced to use system libraries as much as possible).


Good point, so I checked. It has a few private libraries but for the most part uses the system ones. I think I can paste this output without leaking anything personal...

linux-vdso.so.1 (0x00007ffd353a6000) libdl.so.2 => /lib64/libdl.so.2 (0x00007f852c4ea000) libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f852c4e5000) libgobject-2.0.so.0 => /lib64/libgobject-2.0.so.0 (0x00007f851eba4000) libglib-2.0.so.0 => /lib64/libglib-2.0.so.0 (0x00007f851ea69000) libnss3.so => /lib64/libnss3.so (0x00007f851e92b000) libnssutil3.so => /lib64/libnssutil3.so (0x00007f852c4b2000) libsmime3.so => /lib64/libsmime3.so (0x00007f851e8ff000) libnspr4.so => /lib64/libnspr4.so (0x00007f851e8bc000) libatk-1.0.so.0 => /lib64/libatk-1.0.so.0 (0x00007f851e892000) libatk-bridge-2.0.so.0 => /lib64/libatk-bridge-2.0.so.0 (0x00007f851e859000) libcups.so.2 => /lib64/libcups.so.2 (0x00007f851e7ba000) libgio-2.0.so.0 => /lib64/libgio-2.0.so.0 (0x00007f851e5e0000) libdrm.so.2 => /lib64/libdrm.so.2 (0x00007f852c498000) libdbus-1.so.3 => /lib64/libdbus-1.so.3 (0x00007f851e58d000) libatspi.so.0 => /lib64/libatspi.so.0 (0x00007f851e550000) libexpat.so.1 => /lib64/libexpat.so.1 (0x00007f851e51f000) libm.so.6 => /lib64/libm.so.6 (0x00007f851e443000) libX11.so.6 => /lib64/libX11.so.6 (0x00007f851e2fb000) libXcomposite.so.1 => /lib64/libXcomposite.so.1 (0x00007f852c491000) libXdamage.so.1 => /lib64/libXdamage.so.1 (0x00007f852c48c000) libXext.so.6 => /lib64/libXext.so.6 (0x00007f851e2e6000) libXfixes.so.3 => /lib64/libXfixes.so.3 (0x00007f851e2dd000) libXrandr.so.2 => /lib64/libXrandr.so.2 (0x00007f851e2d0000) libgbm.so.1 => /lib64/libgbm.so.1 (0x00007f851e2bf000) libxcb.so.1 => /lib64/libxcb.so.1 (0x00007f851e292000) libxkbcommon.so.0 => /lib64/libxkbcommon.so.0 (0x00007f851e249000) libpango-1.0.so.0 => /lib64/libpango-1.0.so.0 (0x00007f851e1e2000) libcairo.so.2 => /lib64/libcairo.so.2 (0x00007f851e0c6000) libasound.so.2 => /lib64/libasound.so.2 (0x00007f851dfb7000) libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007f851df9c000) libc.so.6 => /lib64/libc.so.6 (0x00007f851dc00000) /lib64/ld-linux-x86-64.so.2 (0x00007f852c512000) libffi.so.6 => /lib64/libffi.so.6 (0x00007f851df8f000) libpcre.so.1 => /lib64/libpcre.so.1 (0x00007f851df17000) libplc4.so => /lib64/libplc4.so (0x00007f851df10000) libplds4.so => /lib64/libplds4.so (0x00007f851df0b000) libgssapi_krb5.so.2 => /lib64/libgssapi_krb5.so.2 (0x00007f851deb2000) libavahi-common.so.3 => /lib64/libavahi-common.so.3 (0x00007f851dea4000) libavahi-client.so.3 => /lib64/libavahi-client.so.3 (0x00007f851de8f000) libgnutls.so.30 => /lib64/libgnutls.so.30 (0x00007f851d800000) libz.so.1 => /lib64/libz.so.1 (0x00007f851de75000) libgmodule-2.0.so.0 => /lib64/libgmodule-2.0.so.0 (0x00007f851de6e000) libmount.so.1 => /lib64/libmount.so.1 (0x00007f851de27000) libselinux.so.1 => /lib64/libselinux.so.1 (0x00007f851dbd5000) libsystemd.so.0 => /lib64/libsystemd.so.0 (0x00007f851db03000) libXi.so.6 => /lib64/libXi.so.6 (0x00007f851de15000) libXrender.so.1 => /lib64/libXrender.so.1 (0x00007f851daf6000) libwayland-server.so.0 => /lib64/libwayland-server.so.0 (0x00007f851dae0000) libstdc++.so.6 => /lib64/libstdc++.so.6 (0x00007f851d400000) libXau.so.6 => /lib64/libXau.so.6 (0x00007f851de0d000) libfribidi.so.0 => /lib64/libfribidi.so.0 (0x00007f851dac0000) libthai.so.0 => /lib64/libthai.so.0 (0x00007f851dab5000) libharfbuzz.so.0 => /lib64/libharfbuzz.so.0 (0x00007f851d729000) libpixman-1.so.0 => /lib64/libpixman-1.so.0 (0x00007f851d67d000) libfontconfig.so.1 => /lib64/libfontconfig.so.1 (0x00007f851da66000) libfreetype.so.6 => /lib64/libfreetype.so.6 (0x00007f851d335000) libpng16.so.16 => /lib64/libpng16.so.16 (0x00007f851da2d000) libxcb-shm.so.0 => /lib64/libxcb-shm.so.0 
(0x00007f851da28000) libxcb-render.so.0 => /lib64/libxcb-render.so.0 (0x00007f851d66d000) libkrb5.so.3 => /lib64/libkrb5.so.3 (0x00007f851d257000) libk5crypto.so.3 => /lib64/libk5crypto.so.3 (0x00007f851d655000) libcom_err.so.2 => /lib64/libcom_err.so.2 (0x00007f851d64e000) libkrb5support.so.0 => /lib64/libkrb5support.so.0 (0x00007f851d63d000) libkeyutils.so.1 => /lib64/libkeyutils.so.1 (0x00007f851d636000) libcrypto.so.1.1 => /lib64/libcrypto.so.1.1 (0x00007f851ce00000) libresolv.so.2 => /lib64/libresolv.so.2 (0x00007f851d622000) libp11-kit.so.0 => /lib64/libp11-kit.so.0 (0x00007f851d125000) libidn2.so.0 => /lib64/libidn2.so.0 (0x00007f851cdaf000) libunistring.so.2 => /lib64/libunistring.so.2 (0x00007f851cc2a000) libtasn1.so.6 => /lib64/libtasn1.so.6 (0x00007f851d10d000) libnettle.so.8 => /lib64/libnettle.so.8 (0x00007f851cbd7000) libhogweed.so.6 => /lib64/libhogweed.so.6 (0x00007f851cb94000) libgmp.so.10 => /lib64/libgmp.so.10 (0x00007f851caf1000) libblkid.so.1 => /lib64/libblkid.so.1 (0x00007f851cab9000) libpcre2-8.so.0 => /lib64/libpcre2-8.so.0 (0x00007f851ca1d000) liblzma.so.5 => /lib64/liblzma.so.5 (0x00007f851c9f1000) libzstd.so.1 => /lib64/libzstd.so.1 (0x00007f851c942000) liblz4.so.1 => /lib64/liblz4.so.1 (0x00007f851c91e000) libcap.so.2 => /lib64/libcap.so.2 (0x00007f851d103000) libgcrypt.so.20 => /lib64/libgcrypt.so.20 (0x00007f851c7e2000) libdatrie.so.1 => /lib64/libdatrie.so.1 (0x00007f851d0fa000) libgraphite2.so.3 => /lib64/libgraphite2.so.3 (0x00007f851c7c1000) libxml2.so.2 => /lib64/libxml2.so.2 (0x00007f851c637000) libbz2.so.1 => /lib64/libbz2.so.1 (0x00007f851c624000) libbrotlidec.so.1 => /lib64/libbrotlidec.so.1 (0x00007f851c616000) libgpg-error.so.0 => /lib64/libgpg-error.so.0 (0x00007f851c5f0000) libbrotlicommon.so.1 => /lib64/libbrotlicommon.so.1 (0x00007f851c5cd000)



Um, okay then. It's just that Chrome's source tree has a hefty 3rdparty/ directory with everything in it, and it's not easy to say when it falls back on rolling its own when building.


I remember needing 8MB to run a graphical web browser with Linux/X11 (I believe it was Netscape Navigator). 8MB was sufficient, 16MB a bit more comfortable.


This article doesn't even touch on the main pain point: bugs. Virtually all software is just as buggy as can be. I dread every time I need to take a piece of software down a non-happy/non-common path. It almost always fails. Working around and dealing with bugs is just a normal, every day part of modern society now.

Simple example, I sold my car to Carvana the other day and just baaaarely pulled it off using Chrome and Firefox. In Chrome the upload image wizard would get a JS exception. That part of the app miraculously worked in FF, but virtually the entire rest of the site was mired in issues as it's obvious Carvana devs don't test in FF. I pulled off the transaction by bouncing between the two.

Even worse, most non-technical people think they did something wrong when they encounter a bug.

Software that is bloated and slow but stable and rock solid? I'd gladly take it at this point.



On one hand I cannot believe what we have actually works as well as it does. Duct tape and bubblegum everywhere at all levels and it actually still works. I’m amazed everyday at what humanity has accomplished with that in mind. On the other hand I can’t help but notice just how damn buggy everything is, and I’m not sure if everything is actually more buggy or if I’m just losing patience with big business software development as I age having seen how the sausage is made. I can’t help being angry at some illusory product person telling people to ship feature X or else with it having a glaring bug in the UX that is easily caught.


The vast majority of humanity has always run on duct tape and bubblegum. :)

There are still computer systems designed with high reliability and accuracy. See flight control systems.

New technology typically doesn't work on duct tape and bubblegum. So it has to be good. Current technology is going to be just right enough to work.



https://en.wikipedia.org/wiki/Speed_tape

It can be hard to tell what is "duct tape" and what is "speed tape" in software these days. An aircraft mechanic doesn't need to know the structural differences between the two - just use speed tape to be safe. Similarly, a pure programmer doesn't have to know the difference between ufw and Windows Firewall - just use the latter and move on.

However, an engineer (mechanical or software) had better understand the differences in both situations.



You're 100% right. Ever since watching the Jonathan Blow talk on the end of the world I have started paying attention to it and it's amazing how many absolutely shitty software experiences we just accept on the daily.

I understand all the points about financial costs and opportunity costs and pragmatism and the rest, and I partially agree with it, but it's hard not to sometimes feel like we've accepted living in a half built world.



Agreed. I remember as a teenager when the first generation Macbook Airs came out, my friend realized you could reliably crash them (brand new, the demo models sitting out at the mac store) just by opening all the apps on the dock in quick succession. Took just a few seconds of clicking and the machine would crash and reboot.

And it's not like things have gotten any better; now I'm an adult with young children, and if one of them gets their hands on my phone or laptop, they seem to be able to just as reliably lock up, crash, or freeze modern devices all the same. These kids are not physically damaging the phone in any way, they're just pushing buttons too quickly or in unexpected orders. That's the state of modern tech that we're in.

To be honest, I can understand why this hasn't been fixed. As mentioned in the article and throughout these comments, we've just come to expect to need to restart every now and again. In this case, people are likely to blame such a problem on the child, and a restart fixes the issue anyway. I just would've hoped for better by now in the lifecycle of these operating systems.





I don't think software is anywhere near as buggy as it was in the past. Some of us had to support Windows machines in the late 90s and early 2000s.


That may be true. But software is also much more ubiquitous and essential these days. At least in the 90s you could be confident your car wasn't buggy.


American cars in the 90s were terrible. The domestic automakers wouldn't be in business today if people actually bought based on reliability. Now you can buy almost any brand and reasonably expect it to last over 100k miles.


Sure, but that had very little to do with software.


Yeah, and I shit you not, many vehicles have had software updates to solve oil consumption issues. People walk away from those service encounters just shaking their head wondering how in the fuck software was causing their car to lose/burn oil. Truth is, several different reasons.

The number of vehicle software updates are staggering.



When the networking stack got worked out by Windows 98, everything was pretty smooth sailing. Until the drive-bys started. Then we moved to Windows NT, and complexity (and, admittedly, security) increased significantly. Windows XP was stable as fuck until the root kits started flowing. We moved through Vista, hopefully avoiding it altogether, until we landed at the fucking dream that was Windows 7. I would have stopped right there and been happy.

Then something got pushed over the cliff. Windows 10 and the UWP...like seriously WTF!? Satan's very own dumpster fire.



But if the software keeps getting modified and expanded it will not be stable. Bugs will be added. And that also causes bloat. I think they go very much hand in hand (and very much correlates with the frequency of upgrades... usually upgrades that no user asked for anyway).


Not sure I agree with that. It's possible to add new features with minimal or even no bugs. It requires good engineering, good test coverage, and often good manual QA and even alpha/beta rounds. But it can be done. Is it done today? Sure, some organizations absolutely follow this approach. I would guess, especially in the commercial space, these things are often lacking.


Heck, half the time even the happy path is bug filled.


The upload wizard wasn't an adblock problem? Anecdotally, I have encountered several cases where that has been the issue.

Not testing on Firefox just makes sense given how niche it has become. Not worth the effort to go beyond Chromium.



I disabled my ad blocker and cleared out all caches and it still persisted. Even if it was, the JS exception was unhandled. Carvana is built in React, the absolute bare minimum would be an ErrorBoundary at the top of all flows (in this case, a modal).


I didn't say it in the post I wrote about C [1], but this is a big reason why I use C: I will have a hard time bloating my software. I can add features, yes, but adding a singular feature struggles to add even 100 kb to the executable.

I don't work for anyone right now, but I do have a "work" machine. This machine is beefy.

But I still run Neovim and tmux instead of an IDE. [2]

I don't run a typical Linux distro; I use a heavily-modified Gentoo, and that includes using OpenRC over systemd. [3]

I don't use a full desktop; I run a TWM called Qtile. [4]

All of this is so my machine is not bloated. When my machine boots up, and I just barely log in, it is running only 40 processes (including the terminal and htop I use to check).

As of right now, I'm engineering software. Truly engineering; I am spending the effort to effectively mitigate all of C's problems, while keeping the software lean and fast. I hope to someday build a business on that software.

I guess I'll see if there even is a market for non-bloated, sleek software anymore.

[1]: https://gavinhoward.com/2023/02/why-i-use-c-when-i-believe-i...

[2]: https://gavinhoward.com/2020/12/my-development-environment-a...

[3]: https://gavinhoward.com/2023/06/an-apology-to-the-gentoo-aut...

[4]: https://gavinhoward.com/2023/09/lessons-learned-as-a-user-3-...



>, but this is a big reason why I use C: I will have a hard time bloating my software.

Your perspective is interesting because I'm old enough to remember when the C Language was considered bloat compared to just writing it in assembly language.

Examples of 1980s programs written in assembly were WordPerfect, Lotus 123, and MS-DOS 1.0. SubLogic Flight Simulator (before Microsoft bought it) was also written in assembly.

Back then, industry observers were saying that MS Word and MS Excel being written in "bloated" C was a reason that Microsoft was iterating on new features faster and porting to other architectures sooner than competitors WordPerfect and Lotus 123 because they stayed with assembly language too long. (They did eventually adopt C.)

I see this "bloat-vs-lean" tradeoff in my own software I write for my private use. I often use higher-level and "bloated" C#/Python instead of the leaner C/C++ because I can finish a particular task a lot faster. In my case I'm more skilled in C++ than C# and prefer leaner C++ executables but those positives don't matter when C# accomplishes a desired task in less time. I'm part of the bloated software problem!



For your own private use, I see no problem with that.

It's all tradeoffs, and finding the sweet spot for every particular instance.

I believe the article is complaining that people just blow past the sweet spot.

In your case, C# is the sweet spot. In my case, I expect customers will want speed; I'm building an interpreter for a programming language.



is there not a minor hint of irony here that, when I think of bloated software, my mind goes almost immediately to everything Microsoft creates (except VSCode, which is somehow the most performant software they've created despite being written in a language that is theoretically slow)?


There was a time when Microsoft wrote extremely lean code, and BillG in particular. Best example: TRS-80 Model 100.


Yes, DONKEY.BAS was legendary... /s


> Your perspective is interesting because I'm old enough to remember when the C Language was considered bloat compared to just writing it in assembly language.

Both perspectives are correct for their respective times. Compilers were much dumber in the 80s. These days your GUI desktop program written in assembly language would probably run slower than one written in C and compiled with modern gcc -O2.



C probably was bloated then. But today it’s the backbone of everything, whereas python will never be that, at least on hardware at the levels we can foreseeably build.

Languages like C brought a massive benefit to accessibility. Devolving “software is slow” to “yeah but C vs assembly” is such a ridiculous crutch argument. Assembly is not remotely approachable to the majority of programmers. C, rust, zig, c++, Java, C# are all approachable languages that are fast and have great fast libraries and frameworks to work with.

All I can see in the “I can finish the see sharp program faster” argument is that “python vs c++.jpeg” from the 2000’s where half the python was importing libraries, but they wrote the C++ from scratch, and everyone who knew nothing about C++ moved this image around like it was some hilarious joke of C++.



> but adding a singular feature struggles to add even 100 kb to the executable

Try supporting unicode



Heh, that is probably the exception that proves the rule!


> As of right now, I'm engineering software. Truly engineering; I am spending the effort to effectively mitigate all of C's problems, while keeping the software lean and fast.

Can you expand on that?

Which exact problems of C's are you working on solving? Do you mean the language itself (writing a new dialect of C), or the ecosystem (e.g. impossibility of static linking with glibc)? Or something else entirely?



Unsafety.

My blog post about C has a blurb about how I'm really writing in a partially memory-safe portable dialect.

I have bounds checks, structured concurrency (to mitigate use-after-free and double-frees), and a bunch of other stuff.
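For readers wondering what a partially memory-safe dialect of C can look like in practice, here is a small, hypothetical illustration of the bounds-checking part of that idea. The IntSlice type and slice_get helper are invented for this sketch and are not taken from the parent commenter's codebase; they just show the common trick of carrying a length next to the pointer and routing every access through a checked helper.

    /* Hypothetical bounds-checked access in plain C (not the author's code).
     * The slice carries its length, and slice_get aborts on an out-of-range
     * index instead of silently reading past the end of the array. */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct {
        int    *data;
        size_t  len;
    } IntSlice;

    static int slice_get(IntSlice s, size_t i) {
        if (i >= s.len) {                          /* check on every access */
            fprintf(stderr, "index %zu out of range (len %zu)\n", i, s.len);
            abort();
        }
        return s.data[i];
    }

    int main(void) {
        int backing[4] = { 10, 20, 30, 40 };
        IntSlice s = { backing, 4 };

        printf("%d\n", slice_get(s, 2));           /* fine: prints 30        */
        printf("%d\n", slice_get(s, 7));           /* aborts with a message  */
        return 0;
    }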



Since you care about these things, why not try a language that has scoped threads and bounds checking built in, like Rust or Ada?

Actually, I had written more about Rust and Ada, but then I read your blog, and it says you like C better anyway because you find it more fun, so nevermind.

You might also like to use D in its "Better C" mode, which at least offers bounds-checked arrays and slices, as well as some other features, while being very similar to C.

https://dlang.org/spec/betterc.html



"I've turned down job offers because they were in C++"

Yep



C++ is huge and highly interconnected. No matter whether you like it or dislike it, and how experienced you are with software development, you either invest years into becoming good at it, or stay away from C++ jobs.


If I was going to go back in time and offer my younger self career advice, I would probably say "just stick with C++". It was the most popular programming language when I started my career. It will be the most popular programming language when I end my career.

Think about it. Is there any widely-deployed OS not written in C/C++? Any web browsers? Spreadsheets? CAD software? Text editor?



I think there might be a market mostly with other engineers like you. Lots of people seem to really love this stuff, look at the forth crowd and how much they adore the language.

For me... I don't notice any slowness or excessive battery drain in VS Code, so I just use that. Perhaps there's something deeply wrong with my psychology, but I'm just not very interested in simplicity for its own sake. It's cool, but not an approach I'd want to use every day in real life.



> I will have a hard time bloating my software

I wonder how much of this is just because when a project manager comes along and says "as a system administrator, I want to be able to log all of the keystrokes of the users and reports who's slacking to the boss", you say, "ok, that feature will take about two weeks to implement" and they can't argue with you because it's C so they just go away and leave you alone.



I would say truly engineering would be contextualizing these decisions along a spectrum of tradeoffs and positioning your project on that spectrum according to the constraints of its creators and users. That may put you to one extreme of the continuum, but that doesn't mean people whose constraints land them elsewhere are not doing "true engineering."

"I am going to build the strongest, lightest bridge in the world" is a marvel. "I am going to build a light-enough strong-enough bridge for a cost my client can afford" is engineering.



Hear! Hear! Engineering is a practical discipline - and it is a discipline. It's building for purpose. I appreciate Tonsky's point, and agree with much of it, but engineering means turning out into the world things that are fit for purpose, however that's defined. Purpose is +100 and all the rest is something of less than 100. Mathematics is mathematics, science is science and engineering is engineering.


I agree, except that engineering is "fit for purpose at the smallest cost."

For me personally, I am banking on having multiple business customers for my software. If I consider the cost amortized, I can spend a little more to engineer something better.

But yes, it still has to be fit for purpose. My customers will provide a document that explains what they want to do with my software and on what platforms. If I sign a contract, that means I am committing to supporting their purposes on those platforms.



Sure, if they are considering tradeoffs.

But a sign of considering tradeoffs is that every project seems to be just a little different because every circumstance is just a little different.

When you have a software monoculture or monocultures, it's likely that tradeoffs are being ignored.



The problem is it's trade-offs on top of trade-offs, all the way down.


Trade-offs also get a bit meta with "standardization", one way or another. Sure, you could get peak performance if you made custom-length and custom-width screws, but it boosts costs and makes maintenance harder with all the one-offs. Going with standardization is itself a trade-off, which means a monoculture breeds a pattern of ignoring the trade-offs.


The ethical considerations and professional obligations that go into creating a bridge aren't exactly translated into the act of creating software. If we mirrored those in a bridge-building analogy, the "cost my client can afford" would mean the engineers would often find themselves greenlighting a bridge that would fail under sensible conditions in our current world.


No. A civil engineer's response would (should?) be immediately that the client cannot build the bridge for that cost, simply because it will kill someone as a first order effect. That's the ethical consideration. The downsides of janky rendering or poor memory management aren't quite the same unless you actually add up the power wastage and apply it as an impact on global warming. But, those are not a first order effect, and although they should be, they don't have the same influence on the solution.


There's plenty of software that can kill someone as a first-order effect. If you allow second-order, there's even more.

Avionics. Air traffic control. Industrial control. Automotive software. 911 dispatch. Medical instruments. Medical information. Military command and control. Mechanical engineering analysis software.

Lots of software can kill people.



That's not what I'm saying. If a web page takes 2s to render rather than a theoretical minimum of "immediately", for most software that most people write, no one will die, or be hurt, or even feel too bad as a first-order effect. Obviously, if one is writing flight control software, or somesuch, then the stakes are higher, and one ought to be aware of that. That's the practicality calculation. If you're going to kill/hurt/really upset someone, then what you're doing is not fit for purpose (unless that is the purpose :) )


FWIW, the embedded domain usually has a little more focus on efficiency and correctness compared to enterprise software.


I don't understand this position, sorry. A bridge that would fail under expected conditions is straightforwardly not strong enough for its purpose?


The assumption you're making is that "strong enough for purpose" is always true for any value of "cost my client can afford."

That's what I was trying to draw attention to.



Heh...

  > ... I use a heavily-modified Gentoo ...
Need not say more :)


Hey, it works for me.

Besides, I struggled with sysadmin-type tasks until I bit the bullet and installed Linux from Scratch and then Gentoo. It was one of my best educational investments.



All this work and you don't earn money with your style of engineering. Have you maybe stopped and wondered why you don't?


I would find it very satisfying to do what he’s doing. I’m not sure it needs be judged against making money.


I know exactly why: I haven't started marketing yet. :)

After I get to MVP and actually try marketing, we'll see what happens.



Not to pick on you—this is something I and most other engineers also struggle with—but I think this sums up the reason for most of the complaints in this thread. That is: builders who prioritize marketing outcompete those who prioritize engineering quality.

Making something people want enough to pay for is very difficult; if you are trying to do that while also imposing a bunch of other constraints on yourself that have no clear marketing benefits, your odds of success go down a lot.

Engineering excellence can sometimes be used successfully as a form of marketing in itself and you could well pull this off, as your content is super engaging and well-written.

But I would suggest to every engineer to consider Chesterton’s fence. Why is all the winning software in every category slow and bloated and seemingly unconcerned with engineering excellence? Is it because everyone involved, the engineers that build it and the customers that buy it, are all technical philistine morons? Or is it because the products that win in the market prioritize… winning in the market, and all their competitors that don’t prioritize that for whatever reason fall by the wayside?



You are right on many levels, and I know it.

> But I would suggest to every engineer to consider Chesterton’s fence.

I agree.

In fact, I believe I have considered Chesterton's Fence; I have a plan.

I'd like your opinion on my plan.

> Making something people want enough to pay for is very difficult.

Yes, absolutely.

To fight this, I have spent years, 3.5 years, figuring out what people hate about build systems, and I have designed mine to address those.

While that doesn't guarantee success, I think it may improve my odds. Do you agree?

> That is: builders who prioritize marketing outcompete those who prioritize engineering quality.

Yes, and it sucks.

I hate marketers. I hate marketing. I hate the fact that I'm going to do it. But I have to.

So my marketing will be focused on giving people something for just paying attention.

> Engineering excellence can sometimes be used successfully as a form of marketing in itself and you could well pull this off, as your content is super engaging and well-written.

Thank you for your compliment! I hope so, and my plan is to emphasize this.

I'm going to post four articles on HN, each a week apart, and each will be designed to give readers something substantial, with a blurb about marketing at the end that will be clearly marked as marketing.

* The first will be new FOSS licenses designed to better protect contributors from legal liability.

* The second will be a post on language design and psychology.

* The third will be a deep dive into Turing-completeness and what it means.

* The fourth will be the source code as a Show HN, along with an offer for early adopters.

> Or is it because the products that win in the market prioritize… winning in the market, and all their competitors that don’t prioritize that for whatever reason fall by the wayside?

It's awful, but you are right.

So this marketing push is all I will do for four weeks: preparing and posting, and responding to comments. This is when I will "prioritize winning in the market."

Does this have a chance of working?



"Does this have a chance of working?"

It's a start for sure. But personally I'd suggest changing your mindset a bit from the idea that you'll come out of the code cave for a month to grit your teeth and do marketing. Granted, this is much better than just staying in the cave. But really to run a successful business I think you need to accept at a deep level that marketing, making money, and growing the business is now your main job, and you will permanently need to spend at least as many thought cycles on that as you do on programming.

To wit, while posting those articles and a Show HN sounds like a good plan and you should definitely do that, what will you do if they all fall flat and get no traction? It's a distinct possibility, and I hope you won't just give up.

I'd think more about what you're going to do every week for the next year to get users rather than putting all your chips on an HN launch that may or may not pan out. Even if you do rock the HN launch, you're probably going to have the "trough of sorrow"[1] to contend with after, so I'd think more about how you can make marketing a repeatable part of your rhythm in the long run.

1 - https://andrewchen.com/after-the-techcrunch-bump-life-in-the...



Thank you so much for your answer!

> while posting those articles and a Show HN sounds like a good plan and you should definitely do that, what will you do if they all fall flat and get no traction?

You got me. I was planning on giving up. :P

I am in a place where the only cost to switching projects and trying again in three years is the opportunity cost, so because I'm bad at constant marketing, that's what I was going to do if I got zero traction.

If I got only some traction, I'd weigh my options.

I was only going to worry about long-term weekly marketing if the launch went well enough.

Because I'll be frank: I have no idea how to do constant marketing that doesn't bother people or waste their time. If I would waste their time, I'd rather just throw my own time away and switch projects.

> Even if you do rock the HN launch, you're probably going to have the "trough of sorrow"[1] to contend with after

Good blog post, and yes, I agree. I am expecting the trough as I build up the MVP.



The issue with your strategy imho is that failing to get traction with an HN launch is not that much of a negative signal. Getting to the front page is a bit of a crapshoot and not achieving it doesn't necessarily mean no one is interested in your product--it could mean you got unlucky or just need to iterate on your messaging.

If you haven't gotten it in many users' hands yet, it might be a good idea to try recruiting like 50-100 users first, either one-by-one through email reachouts or in smaller communities where it's less hit-or-miss, like niche subreddits. If some of these users like the product and stick with it, start giving you feature requests, etc., that tells you that you're on to something. Conversely, if you can't get even a small group of users to try it and stick with it using that approach, it's much more of a negative signal than a failed HN launch and probably indicates that something needs to change.

Whatever route you decide to take, I wish you the best with it!

Also:

"I have no idea how to do constant marketing that doesn't bother people or waste their time. If I would waste their time, I'd rather just throw my own time away and switch projects."

That's noble of you, but I would cut yourself some slack. Ideally you would market in a way where you don't bother people or waste their time, but getting users often requires trying things where you risk getting close to that line. Sometimes you might cross over it, but that's just something to learn from.

Trying to market a product while never bothering anyone the least little bit is a bit like trying to be a comedian without offending anyone or to find a romantic partner without enduring some awkward dates. I think it just goes with the territory.



As sad as it makes me, I believe you.

I'll think about what to do.



I think if you succeed, it will be because other developers are investing in you personally. Because they like what you write and how you think, and thus trust you to write good software.

I think patio11 is someone who has used a similar marketing strategy to great success. People hire him because of the quality of his writing demonstrating his understanding.



I think you are absolutely right.

I also think I can't replicate patio11's success! He's a much better marketer and writer than me.



Here is how the main factor incentivizing this sloppy engineering behavior gets justified: everything is cast in terms of making money. Revenue is good -- fine. But "therefore no revenue is bad" doesn't follow; you need to justify that somehow.


No revenue generally means the project eventually gets abandoned or relegated to spare time because people gotta eat. So yeah I'd say it's a bad thing.

I'm all for quality engineering. I'm just saying that if your whole plan for a new product is "quality engineering", then you have no plan at all. The horse needs to come before the cart.



Even couching the output as a "product" begs the question. It comes with a whole set of connotations around what's expected in and from such an effort.

What you're describing is a tragedy of the commons from the perspective of social and civilizational benefit. I agree that's what it is, but I think we should be more careful before justifying it.



That's fine for theoretical discussions. The poster I was replying to implies that their goal is to sell a product and make money, so I was speaking to that.


Ever tried HolyC?


Using Arch with GNOME and Vim/Neovim. A happy balance between usability and resource usage. Programmers on Linux usually care about efficiency, and Christian Hergert has recently been improving a lot of things in GNOME. This lets me use a ThinkPad X220 which runs circles around some modern laptops with Windows. I understand that people prefer the small footprint of C. I like C. My personal preference is C++ (low- and high-level, more safety and flexibility).

The problem is greed - i.e. capitalism. The blog mentions Electron, and therefore Chrome and JavaScript. They are an awful combination and allow companies to save on programmers. Programmers who handle C, C++, Rust or Python are a small group. Java, C# and JavaScript consume more resources, allow building a lot more stuff quicker (like a drug) and, most importantly, forgive mistakes. So the industry decides to waste the resources on our computers, because it doesn't need to buy and maintain them. The customer pays twice: for the software and for the next computer with more soldered main memory. The key point is that the software companies don't pay for the hardware or the environmental damage! I do a little JavaScript and some Java myself. Efficiency? Nobody asks for that and I should not spend time on it. There is no law stating that managed languages need to waste resources, but it is a side effect.

Remember Steve Jobs? I don't appreciate what Apple does. But he banned Flash for a reason. Resources! Okay, also bugs. And for the same reason they should ban Electron. Apple devices run faster with fewer resources because Apple saves on hardware. Why build devices with huge batteries when you can achieve more runtime with less material and charge the same money? The EU needs to enforce side-loading on the iPhone. But I also think Apple should be allowed to keep Blink (and therefore Electron) banned from the App Store. I don't want to see my battery dying because some corporate manager decided to drain it. But if you need Blink? Side-load.



Jobs also said native apps were unnecessary and we can all just use web apps. (Despite working on an app store at the time.) So even the prophet himself isn't such a great example.


Yes. Jobs was wrong about many things - we all err constantly.


What usability features of Gnome do you find indispensable that prevent you from moving to a lighter-weight DE, like LXDE, XFCE or Cinnamon?


Highly keyboard-focused usage, with focus on the application window. I specifically like being able to just type "terminal" and have the terminal open, either as a new window or switching to one that is already open. I also Alt+Tab, but with many open applications the overview makes finding things more convenient. Another thing is that it is also easy to use with the mouse, and the overview makes that pleasant (good for novice users).

They eliminated the inappropriate "desktop metaphor" from Windows 95, and the "system tray".

The negative side is that GNOME often removes options or hides them - which drives experienced users (the people needed to pull in new users) and some developers away to forks. They are right not to support every bewildering option, but the needed ones must be in place even if the UX people don't use them themselves.

https://ometer.com/free-software-ui.html

Some of this is right. Do things correctly automatically and don't provide unneeded options, to avoid complexity. Some of it is not! Also provide the required preferences, e.g. "Do not suspend on lid close", because some people don't want that! The GNOME people assumed it was only needed as an option because of problems with suspend (S3) in the past. Maybe people used it to bypass issues, but that isn't the use case of the preference. Another thing: with "When you've got five clock applets", I want to know what is missing from the first one that made all the others necessary. GNOME learned that people don't need an "Emacs" as a UI shell, but went somewhat to the other extreme.

PS: And GNOME just looks good by default. I don’t need themes because it is fine.



I've noticed a similar trend in my current organization. In the beginning, when we were smaller, the details mattered. Things couldn't be slow, animations had to be smooth, scrolling speed mattered, loading times mattered. We aimed to build the most efficient product, anything to increase user efficiency.

Then as the team grew, the values changed to favor anything that improved developer efficiency. More abstractions, more layers, more frameworks. The tradeoff of saving one day of developer work was worth the cost of millions of user seconds collectively. I think the difference was just the visibility - management can see the costs of things on the development side, but they can't see the benefits of a slightly faster launch time, or better caching, or smoother scrolling. They aren't measurable, and once an org gets to the point where all it cares about are measurable numbers, I think this is a natural course.



Now just wait for the deplorable impact of LLM on code quality and performance in the years ahead. There's a new wave of programmers who are "GPT whisperers" and spend most of their time hooked to a chatbot (sometimes right in their IDE) that programs for them.

Of course, that's until AI gets to the point where it can fix everything we (or it) programmed wrong, including AI itself which is highly inefficient just like the OP predicts.



I don't think it's a given that LLM-driven coding is a negative for code quality and performance. In my experience coding with ChatGPT, it often reminds me to think about error handling, edge cases, and performance issues.

It also over-prioritizes readability if anything, using descriptive variable names and documenting every line with a comment. It's maybe not as good as the best engineers at considering all these factors, but I'd say it's better than the average developer, and may be better than the best engineer too when that engineer is tired and in a hurry.



It's not just about being measurable, it's about being profitable. Capitalism results in efficient production, not efficient products, because it's only interested in extracting profit from the production system.

Cars were optimised only after external (oil) shocks were applied, and even then very unevenly (American cars continue to be very inefficient). In fact, inefficient products are often more profitable in practice, as customers have to replace them more often; the crappy iPhone cable that splits after a year means Apple can charge you again (and again) for its replacement. White goods now break more often, but they're built more cheaply, so per-unit profit is higher than it would otherwise be. What matters is efficiency in production, not after-sale use.

Capitalism in software means churning out new automation as quickly as possible, letting consumers pick up the resulting waste of energy and time. The production chain gets more and more standardized and optimized: you can now swap React developers, or Kubernetes admins, like you can swap warehouse workers, with all that it entails in terms of salary pressure. Some of that automation is effectively self-justifying, in the same way accountants make accountancy terms obscure so they can justify their jobs; but that's about it. Everything else is about profit.



Admittedly, I stopped reading just over half way, but the crux of the argument appeared to be that software should be fast, though I don't recall any justification for that philosophy other than suggesting that it is basically a truth. The article did acknowledge the counter point that in some cases the efficiency gains will never make up for the time spent chasing this efficiency, but it hand-waved that away without interacting with the argument.

The other key trait of the article was cherry-picking data and oversimplifying domains. Several times the article alluded to the emotional plea of "what could the software possibly be doing that takes up that time/space" (my paraphrase), but it didn't make any serious attempt to answer that question, using the lack of a provided answer as if it were an indication that no valid answer exists, and comparing various bits of software that do not have feature parity as if their only practical difference were performance. (Edit: Updated wording of previous sentence to be more clear.)

There's definitely a lot to be said about how software could be more efficient, as well as the social, environmental, and business costs of inefficiency, but there is also much to be said about how modern software empowers people that otherwise might not be able to write anything to write something "bad" that does what they need, or about how modern software developers tend to aim for "fast enough" in the way that a structural engineer would choose "strong enough."

There's much rich debate to be had, but this article didn't include it, instead going for an emotional rant, and failing to engage with actual reason. There is, in my opinion, truth to parts of the argument, but the article made itself clear that it didn't want a discussion.



>Admittedly, I stopped reading just over half way, but the crux of the argument appeared to be that software should be fast, though I don't recall any justification for that philosophy other than suggesting that it is basically a truth.

But it is the truth, and all we have to do is simply look at software from the users' point of view.

What device did you use to write this comment? An iPhone 4, or a 14? What device do you use as your workstation, some Pentium with 4 GB of RAM?

Heck, even going outside the software itself to a related area: over what network did your device talk to the servers? 5G/fiber counted in hundreds of Mbps, or 2G counted in a few kb/s?

At the end of the day, actions speak louder than words. And you can pretend all you want that "wanting faster software" is an unproven axiom, but if the axiom is followed by literally all of society, it might as well be taken as truth.

...especially since it originates from the exact same place as the "developer time is worth more" one. The only difference is whose time we are saving.



It's only truth because no great alternative is provided. My best devices are ones from the past. I gave them up only because software rendered them obsolete, not because I wanted the newer model.

If half of society is speaking a new language AND an old language, it doesn't really matter if the old language is superior. You need to be able to speak the new dialect just to navigate society, even if the new language only exists as a mechanism to differentiate the new generation from the old.



Especially since there's 1 developer and thousands to millions of users..


> What device did You use to write this comment?

Actually? An old iPad on WiFi that is stationed on the other side of several walls.

Honestly, a lot of people don’t mind waiting a second for actions to complete. (For some of us, it’s the only pause we get to take during the day.)

There are enough low-end phone sales that we should be able to accept that some people really don’t mind taking an extra moment if it saves a dollar.

This really isn’t to say that software should be slow, but I think we should acknowledge that speed is not the sole value that users concern themselves with when using software. Many times it isn’t even a primary value.



For the record -

There is no evidence whatsoever that “performance trades with developer time”. 99% of the time, reasonable performance is a skill issue, not a developer time issue.

In actual fact, if you look at “clean abstractions” and other nonsense, you can see that higher developer time actually seems to equate to lower performance. As we all also know, the best indicator of bug count is lines of code, so adding in all these abstractions that add lines of code not only makes code slower, but also results in more bugs.

That is to say, all current evidence points to the exact opposite of their claim:

Higher dev time = lower performance and more bugs (assuming higher dev time is coming from trying to abstract)



> There is no evidence what-so-ever that “performance trades with developer time”. 99% of the time, reasonable performance is a skill issue, not a developer time issue.

I think this is reductive. There are lots of things people can do in Python that would be slower to write in C, and no amount of skill is gonna close that gap.

> In actual fact, if you look at “clean abstractions” and other nonsense, you can see that higher developer time actually seems to equate to lower performance. As we all also know, the best indicator of bug count is lines of code, so adding in all these abstractions that adds lines of code not only make code slower, but also results in more bugs.

I understand the argument against bad abstractions, but what abstraction is adding lines of code? I think we can agree that most of the time abstractions are to reduce lines of code, even if they may make the code harder to read.

> Higher dev time = lower performance and more bugs (assuming higher dev time is coming from trying to abstract)

I'm sure there are lots of projects that are buggy with few abstractions.



>I think this is reductive. There are lots of things people can do in Python that would be slower to write in C, and no amount of skill is gonna close that gap.

Write python without import statements and then try to make this claim again.

C using libraries is often as easy, or easier than python with imports. You don’t get to let python use someone else’s prewritten code while C has to do it from scratch and then say “see”. It’s a false equivalence.

>I understand the argument against bad abstractions, but what abstraction is adding lines of code? I think we can agree that most of the time abstractions are to reduce lines of code, even if they may make the code harder to read.

I don’t agree with this claim. I’d say go check out YouTube videos from people like Muratori, who tackle this claim. We have no measurements one way or another, but if I was putting money down, I would bet that nearly all abstraction that occurs ends up adding lines instead of saving them.

>I'm sure there are lots of projects that are buggy with little abstractions.

I’d tend to agree that the evidence shows this, given that the vast majority (and it’s not even close) of bugs are measurably logic errors.



But we're not talking about Python using libraries and C using libraries. We're talking about Python's basic language features versus what C patches in with libraries. Which makes a huge difference to me.

> I don’t agree with this claim. I’d say go check out YouTube video from people like Muratori who tackle this claim. We have no measurements one way or another, but if I was putting money down, I would bet that nearly all abstraction that occurs ends up adding lines instead of saving them.

We may not have measurements of the trend, but we can measure it on each abstraction fine. How many lines of code is your abstraction and how many lines would you have to duplicate if it wasn't for your abstraction? If you're not saving that much, don't make the abstraction.

Again, this isn't talking about maintainability. I still question the claim that less lines of code means less bugs, but I know it's a popular one. In my experience, it's the terse code that's doing a lot that winds up being buggy. I prefer things to be longer and explicit, so you're talking to someone that doesn't even like abstractions that much unless they're necessary.

> I’d tend to agree that the evidence shows this, given that the vast majority (and it’s not even close) of bugs are measurably logic errors.

I don't think this premise you propose is obviously false, but we have lots of evidence that memory bugs are the cause of most bugs.



>pythons language features

The standard library is still a library. Python doesn’t provide very much as a language feature.

There are pros and cons to standard libraries. If we were talking about brand new C with not a massive community of quality libraries, then yeah. But we’re not. We are talking about a language with a half century of great libraries and frameworks being provided.

>memory bugs cause most bugs

Yeah no. Not even close. You’re thinking of Microsoft’s citation that 50% of their security bugs are memory safety, but that’s not 50% of all bugs.

Basic standard logic bugs are far and away the largest contributor to bugs.

The way we know this is that language choice virtually doesn’t matter. On the whole, developers write 20-30 bugs per 1,000 lines of code regardless of language, so memory errors simply cannot be the largest contributor to bugs.



I kinda agree, but a skill issue can easily be extended into a time issue.

Because it takes time to skill up.

And also, if you put a higher bar on the required skill level, you inherently get fewer developers, which means more work per developer, which then converts into a time issue. Obviously it's not one-to-one, because a great dev will spend less time doing a good job than a clueless one will making a mess, but still.



If that was true, we should all use assembly because C is too inefficient. Abstraction layers are necessary.


> to be that software should be fast, though I don't recall any justification for that philosophy other than suggesting that it is basically a truth

The gist is that if your code takes just a couple of seconds to complete on your fancy M1 Mac, there will be a pretty big chunk of your potential audience who will have to wait for minutes (and there's that surprising character trait in many non-technical users that they simply accept such bullshit, because they don't know that performance could be drastically improved without them having to buy new hardware).

But unless devs test their code also on low-end devices they will be completely oblivious to that problem.

And the actual problem isn't even the technical aspects, but that some devs are getting awfully defensive when confronted with the ugly truth that their code is too slow, and start arguing instead of sitting down with their product manager and making time for some profiling and optimization sessions to see what can be done about the performance problems without having to start from scratch.



What percentage of your target customers are going to have low end devices? And what is their estimated value compared to people who upgrade their tech (like most people with money would)?

Is it a problem or are those people simply not worth serving?



Gamedev quantified this. The answer is: many more than you think.

And sure, they’re not worth serving, as long as you’re not worth their money. Meanwhile a competitor will.

Don’t be fooled by the browsers. Most products aren’t browsers. And if you don’t snap up the long tail, someone else will.



I liked the emotional rant. Sometimes data just obscures what you know to be true. I don't need data to tell me that I should try my best to make the best software, and that's what this article reminded me.


There is one paragraph at the start where I thought "yep, the author just answered his own question here":

Only in software, it’s fine if a program runs at 1% or even 0.01% of the possible performance. Everybody just seems to be ok with it.

So yeah, everybody is ok with it. Move on.



This thread is ample evidence that not everybody is ok with it. Also, note that "being ok with it" and "seeming to be ok with it" are two extremely different things.


I'll leave you with this link, the top IDE index: https://pypl.github.io/IDE.html

Surely you can see that this top list is optimized for features, not for speed. So yeah, people can complain here on HN all they want, in the end the 'evidence' of which software is preferred is pretty clear, even in the programmer demographic.



I didn't know how to put my thoughts about the article into words so decided not to, but thankfully this comment does it better than I could have.

Really annoying how the author brings up counter points and comparisons but does not deeply engage with them, as if they're absolute truths and only rhetorical, when they're far from that.



Huh, the author feels the pain of endless crappy and slow software. If you feel all his points are either superficial or wrong, you could easily come up with counterarguments better than the author's. But it seems all you are saying is "I don't like it because I don't like it."


> Huh, the author feels the pain with endless crappy and slow software.

If that was all, it would've been fine... if he was only listing bugs and saying he's tired of crappy software. But he has framed the article in a way as if he's describing the "why" and how the problem can be solved. In fact I've been following the author and he has a place where he lists all bugs he finds in software, and that in my opinion is a great initiative, but this article just opens too many threads and tangents and only appears to explain the root cause.

I could come up with a few examples like:

- trying to compare cars, buildings, and planes with software and not diving deeper into how much different they are and why. - or saying everybody seems to be ok with inefficient software without diving deeper into why everyone's ok with it. - or mentioning a tweet about a guy who spent more time trying to make something faster than he will ever gain back without going further into if it's a good/bad thing and why.

> If you feel all his points either superficial or wrong you could easily come up with counter arguments better than author.

I never said his points were superficial or wrong, just that he does not deeply engage with the threads he opens up. For that reason, I have no counter arguments to come up with, since he is just complaining about his issues and also does not come up with a good solution as such. Of course everyone would like nicer software, that's not new. Why is software different than other industries in the first place? Is it worth improving it? What are the trade-offs? What is the effort required at scale? What would we lose if we made software more like manufacturing? What would we gain? If he knows a path forward, does he have a better way to express it than the last "manifesto" paragraph? Etc.



> Admittedly, I stopped reading just over half way, but the crux of the argument appeared to be that software should be fast, though I don't recall any justification for that philosophy other than suggesting that it is basically a truth.

Are you suggesting that maybe users (including you and me) should have to put up with painfully slow, stuttering software? I don’t see why his claim requires any justification - to my mind, it should be just as self evident as the fact that inflicting physical pain upon others should be avoided.

> […] modern software developeres tend to aim to be "fast enough" in the way that an structural engineer would choose "strong enough."

But they don’t aim to make software fast enough. My experience last week: Windows 10 file Explorer took ~2.5 seconds to open. Close it and re-open it: same thing. Open it, right mouse click another folder to open a new Explorer window: same thing. Fresh install of Windows 10 on a top of the line workstation-type laptop with 64gb of RAM.

Not all, but a frustratingly large proportion of modern software is dog shit slow.

The reason doesn’t need to be spelled out in the article: if the developers of these slow apps cared, they could figure it out. From my experience, it usually looks something like this:

Some developer has an emotional attachment to Protocol Buffers, and will stop at nothing to see its adoption within the org. But their software is pretty heavily invested in JSON. So they rewrite the software to read the existing JSON files from disk (or REST web service response bodies, whatever), and reserialize them to protobuf in memory. Tada! Now we’re using protobufs, great. Of course, nothing meaningful was actually achieved here - they already had a perfectly fine, ready to use, deserialized, in-memory data structure before they added protobuf to the mix. Oh, and that plain struct in memory was faster than traversing a protobuf: the former had small substructures laid out in the same allocation as the parent, whereas substructures in protobuf involve multiple allocations and chasing pointers. Next step: realize that REST is lame, and gRPC is hip. But it’ll be practically impossible to rewrite everything from REST to gRPC, so they do the only reasonable thing: create proxies that sit between the client and server that translate REST requests/responses to/from gRPC! Now that we have that in place, we can add an additional proxy to the mix: Envoy. Envoy is a super popular layer 7 proxy, so it’s gotta be good. What functionality will be used? Any load balancing? RBAC policies? TLS termination? Nope. None of it. But because Envoy is “good”, adding it to the stack with no justification must also be intrinsically “good”, right? Right!

(Edit: do you see how long winded and boring this example is? This is precisely why the author shouldn’t expound on the “why” - anyone involved in the development of needlessly slow software (who isn’t blind to the problem because they are part of it) can recount similar craziness. Adding this to their blog would make for boring reading, and distract from the aim of their article.)

Buzzword/resume driven development, unwarranted layers of indirection (for no gain), absolution of responsibility via appeal to authority (if the top 10 software companies created and/or use some software, then surely we can blindly use that software too and enjoy the same success, despite not giving any consideration to whether it’s even remotely the right tool for the problem at hand), cargo culting, etc.

The reason software is painfully slow usually boils down to lack of critical, rational thought: either out of laziness and/or deferral of responsibility, or because of some emotional attachment to some type of software component.



> Are you suggesting that maybe users (including you and me) should have to put up with painfully slow, stuttering software? I don’t see why his claim requires any justification - to my mind, it should be just as self evident as the fact that inflicting physical pain upon others should be avoided.

I'm suggesting that the blanket assertion that slow software is painful software doesn't hold true. Software can be so slow that it's painful, but "slow" in absolute terms does not automatically make something painful.

A counter-example that the author used when describing people with a pride in inefficiency was to quote:

> @tveastman: I have a Python program I run every day, it takes 1.5 seconds. I spent six hours re-writing it in rust, now it takes 0.06 seconds. That efficiency improvement means I'll make my time back in 41 years, 24 days :-)

I'm also asserting that slow and painful software that does what I want is better than fast software that doesn't or that I can't afford.

Heck, I empathize with the author: when I run Slack there is a perceptible delay when I type. Do I want them to fix it? I mean, no, not really. I'd personally rather they provide an offline search mechanism or a way to write direct CSS for theming. It's something that is annoying but it is less annoying that missing new features or the price going up. Likewise, I could use Vim for editing if I really wanted to, but I'd rather have the featureset of IntelliJ.



I mostly see bloat created for other reasons. Think about situations like Docker containers replacing single applications (and dragging an entire Linux userspace installation with them). Deploying in Kubernetes with its own CNI as well as a DNS server and a bunch of other networking-related stuff, while the network in your datacenter already does all of that. Packaging the whole Python virtual environment into a DEB or RPM package instead of shipping just the library you want users to install.

This bloat is harder to deal with, on organization level, because people creating it justify it by saving on development effort necessary to make the product leaner. There's no financial incentive to not use Docker for deployment (and spend developer's time ensuring the code works on different platforms with different libraries).

And the software industry isn't the only victim of this situation. The first time I ever encountered this was in a... church. American missionaries coming to the former Soviet republics would bring with them pocket Bibles for free handouts. Since I studied printing, this pocket Bible was strange to me in many ways. It was printed on paper lighter than 20 g/m^2. This was unheard of in the Soviet printing industry; if it had ever tried to produce such paper, the paper would simply have fallen apart, because they didn't have access to the technology necessary to produce the plastics that held this paper together. Because the paper was so thin, it required a lot of "filler" (again, more plastic), and that made it worse for recycling. It was printed on an offset machine. The Soviet industry didn't print literature using offset machines; it didn't have the technology for making the precise high-resolution plates necessary for such printing, so letterpress printing would be the way to do it. But letterpress makes a noticeable difference in the texture of the page, and it also pretty much prevents you from using "unorthodox" font sizes, ensuring that the font's author could see exactly how letters are going to look on a page, making the overall experience much more pleasant.

All in all, it was kind of a technological marvel I knew I couldn't achieve with what I had / knew, on the other hand, all this technology was intended to decrease cost at the expense of marginal drops in quality. In truth, at the time, I didn't think this way. I saw the technological marvel part, and didn't notice the drops in quality. The realization came a lot later.



> how modern software empowers people that otherwise might not be able to write anything to write something "bad"

What's so special in the modern software? How do you tell if software is modern?

A few points to illustrate the difficulties with your descriptions: BASIC and SQL were meant to empower people ... to write something "bad" a very long time ago. So did Fortran, as well as some other languages / technologies that didn't survive to the present day.

Python or Java can be called "conservative" if you are very generous, but really, in truth, should be called "anachronistic" considering the programming language development that happened in the '70s. Languages like J or Prolog are conceptually a lot more advanced than Rust or Go, but were created much earlier. Many languages are actually collections of languages that have been created over time, e.g. C89 through C23 -- does this make C a modern language? Only C23? Is there really that much of a difference between C89 and C23?

Is there some other way to define modernity? I.e. not based on time of creation nor based on some imaginary evolutionary tree?



I’ll be honest here, time got away from me. I had Node and Python in mind, simply having forgotten how old Python was (and Node not exactly being the new guy anymore). :p


That's a perpetual complaint.

The truth is that a lot of conveniences we take for granted have a cost that adds up a lot. A 4K screen has 17 times more pixels than an 800x600 one, and uses 32 bit color. So the raw size of graphics made for a modern display is around 68 times bigger.
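
A back-of-the-envelope check of that figure (the colour depths are assumed: 32-bit today versus 8-bit in the 800x600 era):

    fn main() {
        let modern_px = 3840.0 * 2160.0;     // 4K pixel count
        let retro_px = 800.0 * 600.0;        // 800x600 pixel count
        let modern_bytes = modern_px * 4.0;  // 32-bit colour = 4 bytes per pixel
        let retro_bytes = retro_px * 1.0;    // 8-bit colour = 1 byte per pixel
        println!("pixels: {:.1}x", modern_px / retro_px);                 // ~17.3x
        println!("raw framebuffer: {:.0}x", modern_bytes / retro_bytes);  // ~69x
    }

That lands roughly on the "around 68 times bigger" figure above.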

Where before a static picture was acceptable now the norm is a high quality, high framerate animation.

Arial Unicode is a 15 MB font, which wouldn't even fit in the memory of most computers that used to run Windows 95.

Spell checking everywhere is taken for granted now.

And so on, and so forth. That stuff adds up. But it makes computers a whole lot more pleasant to use. I don't miss 16 color video modes, or being unable to use two non-English languages at once without extremely annoying workarounds.



This doesn't explain it, since within the very same constraints of 4K and Unicode etc. there are apps that are orders of magnitude more efficient.


It does have a cost. But maybe the cost isn’t worth it sometimes. I’ll take the 4K screen please. I’m willing to pay for those pixels.

But animations for no reason other than to make it seem like waiting for something is less of a chore? Nope.

Unicode is something I’m willing to pay for.

But we should be able to draw the glyphs on the screen in single-digit ms like we did in 1981. Yes more pixels and more glyphs but it’s possible just not a priority.



> But we should be able to draw the glyphs on the screen in single-digit ms like we did in 1981. Yes more pixels and more glyphs but it’s possible just not a priority.

In 1981 we drew fixed-width bitmap fonts at low resolution. In 2023, a font is a complex program that creates a vectorial shape, which is carefully adjusted to the display device it's being rendered on for optimal graphical results and antialiasing. That said, performance isn't bad at all.

Just resize the comment field back and forth while having a bunch of text within, and you'll see that text rendering performance is perfectly fine. I see no slowness.



So okay, all these artifacts take much more resources, but even after consuming 100 times more compute, why is software still excruciatingly slow?

This comment is something like: even after paying 100K for a performance BMW, the engineer tells the user the car will take 30 seconds to go 0-60 mph. And since the user is not a performance expert, they have to take it at face value.



> So okay, all the artifacts take much more resources but even after consuming 100 times more compute resources why software is still excruciatingly slow?

It's not?

Software used to be way, way slower. I had a 386. I experienced things like seeing the screen redraw, from top to bottom in games running at the amazing quality of 320x200x8 bits. I've waited hours for a kernel build. I've waited many seconds for a floppy drive to go chunk-kachunk while saving a couple pages of Word document. I've waited minutes for a webpage to download. I remember the times when file indexing completely tanked framerates.

Today all of that is pretty much instant.



Everything has a cost. In the cases where the computing cost or degraded user experience is high enough efficiency is optimized for (see ML models for example). In other cases it's not because the end user doesn't actually want that at the cost of fewer features. Cars used to be gas guzzlers until fuel costs and environmental concerns caused customers to want something different.

A lot of engineers forget that they are in fact paid to build a product for customers.

edit: It also seems OP has never had to wait for older Windows or Linux machines to fully boot. Modern versions boot much faster because customers wanted that. Phones are on 24/7 so customers don't care if it takes longer to boot once every few months.



Boot times are still atrocious, especially on newer AM5/DDR5 systems.

My BIOS takes like 20 seconds to POST, and that's apparently normal. (then ~10 seconds for the OS to boot)



Your motherboard may be doing a full RAM check on every POST. There is often an option in the BIOS to disable that.


DDR5 ram has bandwidth of 32,000–64,000 MB/s[1]. Why does it take 20s to check it unless you have a monstrous amount?

[1] https://en.wikipedia.org/wiki/DDR5_SDRAM
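
(Quick sanity check of that point, with assumed round numbers: even a full linear pass over 64 GB at the low end of that range is on the order of two seconds, nowhere near 20.)

    fn main() {
        let capacity_gb = 64.0;     // assumed amount of installed RAM
        let bandwidth_gb_s = 32.0;  // low end of the DDR5 range cited above
        println!("one full scan: ~{:.0} s", capacity_gb / bandwidth_gb_s); // ~2 s
    }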



I don't know where the POST code runs, maybe not on the CPU? Also, I'd imagine this sort of code doesn't parallelise the checks, because it tries very hard to be bug-free, so we are looking at a piece of code scanning the whole RAM linearly; I don't think you reach anything near the max bandwidth of your stick of RAM in this scenario. The problem here is that it does that on every boot, when it should do it once. Maybe you have disabled the fast-boot option or its equivalent in the BIOS?


A huge amount of boot time seems to be because BIOS waits several seconds(!) by default for various hardware to get into a stable state after powering up.


Since when do customers have any meaningful choice or way to express preferences? It's long been a supply-driven market: vendors make what they want, customers buy what's being put on the market.


I feel like it's gotten even worse with pervasive telemetry and A/B testing. Now customers get more of what they "engage" with even if that's not what they want.


>Since when do customers have any meaningful choice or way to express preferences? It's long been a supply-driven market: vendors make what they want, customers buy what's being put on the market.

By your logic Tesla and Apple shouldn't have grown like they did, since customers wouldn't pick them over existing incumbents. Customers do have a choice in aggregate, and they express that choice. Some people who don't agree with that choice try to say customers have no choice, but in the end they do, and the person saying it is upset that they're in the minority.



It took us literally minutes to boot WordPerfect on our school PCs


That's nothing. I remember when booting a game required loading it from cassette tape. Me and my friends would go round to each other's houses, start the computer loading a game, go outside to play a football match and when we got back it'd be nearly ready to start.


I mean, there's booting, and then there's loading a program...

My Commodore 64 from the early 80s booted to a full BASIC REPL in around 1 second. Yes, loading from removable media was slow, but the computer was ready to fully use in less time than it takes "modern" systems to even POST. Totally ridiculous.



Yeah, when reading the part about slow boot times, I was thinking about back when we still had to boot from floppy...


That's why software often came on ROM cartridges for home computers. Floppy disks and cassette tapes were just a cheap compromise.


> A lot of engineers forget that they are in fact paid to build a product for customers.

In a way yes, but I think software became much more than that. In many ways it became as crucial a part of human lives as electricity or medical services. And you can say doctors/engineers are being paid to develop new medicines or new power sources, but those have many, many levels of government control just because of how sensitive and important the matter is, which is not the case with software (yet?).



Partially true, but let's not pretend that when trends are set by companies the size of Google or Apple, they are still following the customer's will the same way some small, or even medium-sized, company does.

They have enough power to shape the landscape itself, and as such they can drastically steer the outcome of "customer needs"



>companies of the size of Google or Apple

Neither of them used to be the giants they are but overall they made solutions that their customers (who are, btw, advertisers in Google's case) wanted and bought over larger existing competitors. Apple's market cap has increased 500x since the late 90s.



Yeah, and?

Obviously they entered the market with simply good products, but that doesn't change the fact that now they have a lot of power that comes simply from their size. I'm not saying it's their only advantage, just that it plays a big role.

But also, come on now. Today's platform lock-ins are much more impactful than anything we had in the '90s. And you don't compete with Google's product anymore; rather, you compete with an ecosystem that half of the world has bought into. Which makes it practically impossible to be a viable alternative.



I feel this deeply.

It's important to remember that as developers, we do have a choice. Not about everything, but there's an option to choose the less-sucky alternative.

You don't have to use Node. You can write good, backwards compatible software just fine on the .net or JVM ecosystems and you can know that it will still run without modification in 10 years.

You don't have to write single page webpages. Old-fashioned HTML that completely reloads the page on each click works just fine and is probably lower latency at this point.

You don't have to write desktop apps using Chromium. Getting started with a UI framework is a little more work but the quality is worth it.

The decision isn't always yours. But when it is, opt out of the suck.



The "everything must be a SPA" mentality these days just saddens me. I get that GMail and similarly complicated apps get benefits from being SPA, but I've worked at too many places that just insist on a mess of complexity on the frontend when basic CSS / HTML (and maybe a sprinkling of JQuery) would give them all the same features in _WAY_ less time (and with significantly fewer bugs).


> You can write good, backwards compatible software just fine on the .net ... ecosystems

(Cough)

The .net core initiative broke A LOT of code. Microsoft obsoleted a lot of libraries. (Some for very good reasons, too. A lot of legacy .net libraries had rather poor design choices.)





You can only pick two out of the three from the triad of optimization (time, monetary cost, quality).

Industry prioritizes functional solutions (requirements) over efficiency. If efficiency is one of the requirements, it will be addressed (e.g. video games). Optimizing for efficiency takes additional effort. The article argues that the software industry is stuck with inefficient tools and practices. Engineers can and should do better, aiming for better apps, delivered faster and more reliably with fewer resources. However, economy dictates that as you optimize two variables from the triad, the third will get de-prioritized.

Edit: I can see the downvotes but no idea why. Would you care to explain?



From what I've seen the faster solution is also usually the simplest, which makes it faster to deliver and easier to maintain. We're far away from what Knuth warned about (micro-optimizing assembly). It's more at the level of high-level design, technology choice, etc.

Like at my last job, we had something like 50 microservices, Kafka, rabbit, redis, varnish caching, etc. For under 1k external requests/second and some batch processes that ran for a few million accounts. If we cut out all of the architecture to "scale", you could've run the whole thing on a laptop if not a raspberry pi. And then a real server could scale that 100x.

The company was looking at moving to a "cloud native" serverless architecture when I left.



Agreed. My argument is that software, from the market/customer perspective, is more like a movie, which is supposed to delight them. And the industry rightly prioritizes the capability/delight factor over mere efficiency for efficiency's sake. Only engineers are interested in efficiency for efficiency's sake (also: will the engineers pay for more efficient software? Nope. Why did engineers abandon Sublime for VS Code?). So when anyone wears a customer hat, they tend to prioritize factors other than efficiency/correctness, unless we are talking about life-saving medical devices or such.


I don't think this can even be blamed on what the industry prioritizes, but on what customers reward. How many users pick one solution over another because it's quicker? Outside of very specific use cases it practically never happens. These types of complaints ultimately come down to wanting users to care about different things. It's not dissimilar from complaints about nobody having fashion sense anymore or buying too much processed food.


> How many users pick one solution over another because it's quicker?

Isn’t that what drives the most sales in upgrading a phone? Idk many people that care about the microscopic improvements to the camera or UI, most of the time it’s something along the lines of “my iPhone 8 doesn’t run fast enough anymore, time to upgrade”. Same with consoles. I remember one of the big pitches of next gen consoles being “look! No more loading screens!”. And even ChatGPT 3.5 vs ChatGPT 4. There are tons of people that will use crappier output because it’s faster. Speed is still absolutely a selling point, and people do care.



> I remember one of the big pitches of next gen consoles being “look! No more loading screens!”. And even ChatGPT 3.5 vs ChatGPT 4.

Those are the very specific use case I mentioned.

How many users are gonna swap to a different chat client because of this or a different word processor because it start 1s faster? As a parallel comment points out, even developers swapped away from the very quick Sublime to Atom and now VSCode. At work hiring at some point became a huge pain because we swapped from Greenhouse to the recruiting tools built into Workday. It was super slow. I hated it, our recruiting team hated it, but it was purchased because it checked all the boxes and integrated with all our other stuff in Workday. The comparison to engine efficiency made me think that if we really want faster software, we need the equivalent of a gasoline tax for software, but what's the negative externality we are preventing?



Yes, the engineers who verbally bat for efficiency/correctness in these threads have no qualms about practically abandoning the efficient Sublime for the less efficient VS Code (in general, of course; not that there aren't exceptions) :)


How much programmer time has been wasted by gdb's byzantine UX? In a rational world, would we collectively invest time into building extremely good debuggers?

It's a tragedy of the commons, where everybody would benefit from better tooling, everybody wastes time dealing with poor tools, but nobody is willing to put time/effort/money into making better tooling.



Like this? https://www.gdbgui.com/

There's plenty of FOSS developers that work for intrinsic rewards and likely produce better software.



I think you’re right that there’s a fundamental tension between those three factors but it’s begging the question of whether we’re seeing those fundamental limits versus something else. For example, I’ve seen multiple teams deliver apps using React frameworks which have no advantages in any of those points - they’re notably slower than SSR, use more resources on both the clients and servers, and don’t ship noticeably faster because while some functions are easier that’s canceled out by spending enough time on toolchain toil to make a J2EE developer weep.

That suggests that while those three factors are part of the explanation they’re not sufficient to explain it alone. My theory would be that we’re imbalanced by the massive ad-tech industry and many companies are optimizing for ad revenue and/or data collection over other factors such as user satisfaction, especially with the effects of consolidation reducing market corrective pressure.



Great point! I've always heard "There's good, fast, and cheap. Pick Two."


>I can see the downvotes but no idea why. Would you care to explain?

I dunno, but this has been taught in PM for as long as PM has existed. I remember learning it in the 90s and it still holds true. If you want it fast and a large scope, you have to throw bodies and planning at it, costing money. If you want it cheap with a big scope, you have to wait for that small team to finish it (time). If you want it fast and cheap, you have to limit your scope.

Time/Cost/Scope, pick 2. Perhaps people are taking issue with quality vs scope, but quality is a part of scope for certain.

https://www.projectmanager.com/blog/triple-constraint-projec...



This is why I'm an embedded programmer on small MCUs. Give me C99 and a datasheet and I'll give you the world in 64kb.

Though times are changing in that world too. Sometimes you have to use a library. And more and more those libraries require an RTOS. Just about to make the plunge into Zephyr so I can use the current Nordic BLE SDK.

Having hard limits on RAM and flash is a great way to prevent bloat. Management is happy to let engineers grind for a month reducing code size if it means the code will fit into a cheaper MCU with less flash and save 10¢ on the BOM. Pure software has no such incentive to minimize resources because the user buys the HW separately. If anything, some SW companies have an incentive to add bloat if they're the same company that sells you a new phone when your old one becomes too slow.



Still, even when you need an RTOS, they're generally small and completely knowable.


The reference CHIP implementation for Matter devices is pretty large. Several MB worth of stuff once stripped.


Why wouldn't, say, iOS be subject to the same incentives?


I have little to add to this before comments start going below the fold. Other than to say that I had this realization in the late 1990s as 1 GHz processors were coming online and software was still as slow as ever. Today we have eye candy, but tech has mostly abandoned its charter of bringing innovation that increases income while reducing workload. Like phantom wealth that fixates on digits in a bank account instead of building income streams, today we have phantom tech that focuses on profits instead of innovation which improves the human condition.

We used to have a social contract between industry, academia and society. Company invents widget, it pays into a university's endowment, student goes on to start the next company.

Today that's all gone. Now company invents widget, billionaire keeps the money, student gets forgotten as university is defunded and discredited through various forms of regulatory capture. Often to thunderous applause.

The stagnation of tech and the subjugation of the best and brightest under the yoke go hand in hand. Your disempowerment is a reflection of how far society has fallen. Loosely that means that even though we know how to make programming better, we will never get the opportunity to do so, because we'll likely spend the rest of our lives making rent. Which is the central battle that humanity has faced for 10,000 years. Like the tragedy of the commons, we work so hard as individuals that we fail at systems-level solutions.

Programming won't get fixed until we stop idolizing the rich and powerful, and get back to the real work of doing whatever it takes to get to, say, UBI.



I don’t think it’s a coincidence that so many of their examples were Google products. If they use the Apple Mail client on the same device, email opens in hundreds of milliseconds (measured just now, from pressing the key to finishing rendering a complex HTML message).

This isn’t to say Apple doesn’t have their own problems but I think it’s not an accident given that Google’s focus is on showing ads while Apple’s is maximizing device value, minimizing energy usage, etc.

The web is frustrating because it’s so easy to hit high frame rates at low power usage in a modern browser, but everyone internalized that “move fast” BS focused entirely on developer experience and half the industry consolidated on a slow framework from the IE6 era rather than learning web standards. It makes me wish browsers had an energy-usage gauge so you’d have to own that decision publicly similar to a fast food place having to list calories in the menu.



It's worth noting here that part of the problem is that the web is ideologically hostile to anything native.

Consider: people complain a lot about Electron app bloat. Why can't Slack optimize a bit? Well, they have a lot of Mac users. One obvious way to optimize would be to incrementally port the most performance sensitive parts of their app to use AppKit so Apple's highly hardware-optimized UI toolkit handles those parts, and Electron handles the rest.

Problem: doesn't work. You can't embed an NSView into a Chrome renderer. This was once possible using the Netscape plugin API, but it was removed many moons ago and now you have to use HTML for everything. Electron is popular, plugins were popular, and Chrome could add these capabilities back and even do them better, but they don't because if your app is pure HTML then this maximally empowers the Chrome team. They can do lots of stuff to your app, and add value in various ways, and if the price of this is less efficiency or that some features become less consistent or even harder to implement then this is a sacrifice they are willing to make.

The result is that a lot of these discussions go circular and just become moanfests, because the ideological constraints of the platforms are taken as unarticulated givens: immovable objects that are practically laws of nature rather than things that can be changed.

There is no specific technical reason why you can't have apps that start out as web apps but incrementally become native Mac or Windows apps as user demand justifies it, it's just not how we do things around here.



There’s definitely a lot of friction there but I think it also hits the cost shifting aspect pretty hard: it wouldn’t be that hard for, say, Slack to use a native Mac app which embeds WebKit but they want to save money by only supporting Electron. For a small startup that makes sense but given the collective petabytes of RAM and power used, it feels like the balance should have shifted at some point.


I don't think WebKit can embed NSViews either? At least not unless you modify the source code; IIRC there are still some bits of the old Netscape plugin code in there.

The hard part isn't a native app embedding web content; it's that if you start with web, you can't easily stick a native view into e.g. an iframe.



“Move fast” is likely the correct thing to do if your goal is to get to market quickly.

I think software companies should just be more comfortable with rewriting code.

Sometimes you have to do it because your requirements changed or assumptions proven incorrect.

Move fast and rewrite things.



Yes - I don’t want to condemn it entirely but it can’t be the only way to work.


It is incredibly frustrating to me how slow gmail can be to load. And I have no idea what I'm gaining with that time, as far as features go.


This is why we must question the term "engineering" in the title "software engineering". Most engineering disciplines care about optimization and correctness orders of magnitude more than the software discipline does. Software is perhaps better seen as mass-market movies or music. Most software is less concerned with hard reality and is constantly struggling to keep up with the intangibles of the human mind and psychology. Put another way, software addresses subtler aspects of reality (the human mind, psychology, etc.), rather than the hard realities of the world. And the human mind is mostly a black box, and quite dynamic and random. As the fancies of the market shift, software shapes itself to satisfy them.

In physical engineering, if a mistake is made, the bridge collapses, lives are lost, and therefore there is a deterrent against mistakes. But in software/movies, nobody cares if there are 10 flop movies/software, as long as one works/pays off.



> Your desktop todo app is probably written in Electron and thus has a userland driver for the Xbox 360 controller in it, can render 3D graphics and play audio and take photos with your web camera.

Can Electron not "tree-shake" parts of the browser engine that are unused? Either by static analysis or manual configuration? Seems like a real missed opportunity to trim down on bloat...



The "build" step of an Electron app doesn't build Chromium, so this wouldn't be very feasible. Building Chromium requires an insane amount of computing power.

According to Google, a Chromium build on normal hardware takes 6+ hours.

And alas, even if it were feasible to custom-build based on what you need, it would have to be done via configuration, since there's no way to know at compile time which language features will be used - your app could (and probably does) include remote scripts.



> 6+ hours

Yeah, that sounds about right - I use a Chromium-on-Android fork and, according to the lead developer, it takes about 3 hours for a release to compile, and that is after optimizing the process as much as possible.



Nope, it doesn’t do this. A while back I looked into doing a “hello world” version of that - trying to remove the Print feature. To do this, I had to build Chromium, which did actually take like 5 or 6 hours on an M1 MacBook.

From what I could tell of the config files, Chromium is pretty modular. It looks like you could just delete a few lines and have it avoid compiling entire subsystems. But I didn’t ultimately achieve my goal because I couldn’t get it to compile with those changes. IIRC it hit a linker error and I couldn’t figure out what to prune next, and I wanted to get back to actually building my product. (I ended up switching to Tauri anyway)

Part of me wants to revisit that project though. It would be so great if there were custom minimal builds of Chromium for Electron apps.



No, it can't. Chrome is deliberately designed to be entirely non-modular. The codebase itself is somewhat modularized in the sense that related functionality is grouped into different directories and build targets, but the Chrome team have no interest in allowing it to be cut down to just the needed functionality.

The main reason for this is politics. The Chrome team is ideologically wedded to the idea that everything should be a web app running on Chrome. They see desktop and mobile apps as the "enemy" to be wiped out by making the web do more and more stuff, until Chrome is the universal OS. Classical ChromeOS is the pinnacle of this vision - a computer that only runs a web browser and nothing else.

Chrome's architecture reflects this, um, purity of vision. It is not reusable in any way and the Chrome team do not care to make that easier. Projects that make it embeddable or reusable are all third party forks that have to maintain expensive patch sets: CEF, Electron, etc, all pay high maintenance costs because they have to extensively patch the Chrome codebase to make it usable in non-Chrome apps. The patches are often not accepted upstream.

This problem also affects V8. Several projects that were originally using V8 are trying to migrate to JavaScriptCore or other JS engines because V8 doesn't have a stable API and building it is extremely painful. It's a subsystem of Chrome, so you have to build Chrome to get V8.

This is a pity. There's a ton of great code in the Chrome codebase, and it can be built in a modular way (as many small DLLs). It's slower to start up when you do that due to the dynamic linking overhead, but it does work. Unfortunately, for as long as the Chrome guys see native apps as a bug to be fixed and not a reality to be embraced, we will continue to have dozens of apps on our laptops which statically link an Xbox 360 gamepad driver.

There are a few possible solutions for this.

One is to not write Electron apps. Java went through a modularization process in version 9 and since then you can bundle much smaller subsets of the platform with your app. I guess other platforms have something similar. Obviously, native apps also don't have this issue both because the platform comes with the hardware you buy and because they tend to be more modular to begin with. But, people like writing Electron apps because it gives you the benefits of web development without many of the downsides (like the ultra-aggressive sandboxing). The nature of the web platform is that it always lags what the hardware can actually do by many years due to the huge costs of sandboxing everything so thoroughly, whereas Electron apps can just call into native modules and do whatever they need, but you can still use your existing HTML/JS skills.

Another would be for an alternative to Electron to arise. There are experiments in using system WebViews for this. That doesn't make the web platform more modular, but at least it means a single install is being reused. You could also imagine a fork of the web platform designed for modularity, for example one in which renderer features are compiled out if you don't need them, or even one that brings back renderer plugins.

Another is to just tackle the issue from an entirely different angle, for example, by opportunistically reusing and merging Electron files on disk and in memory between different apps. If you ship your Electron app to Windows users using Conveyor [1] (or in the Windows Store using MSIX) then this actually happens. Windows will reuse disk blocks during the install rather than download them, and if two apps are using the same build of Electron the files will be hard-linked together so only one copy will be in memory at once at runtime.
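
As a minimal sketch of the mechanism (hard links in general, not Conveyor or MSIX specifically; the file names are made up), two directory entries pointing at the same inode share one copy of the data on disk:

    import os, tempfile

    root = tempfile.mkdtemp()
    a = os.path.join(root, "app_one", "electron_core.bin")
    b = os.path.join(root, "app_two", "electron_core.bin")
    os.makedirs(os.path.dirname(a))
    os.makedirs(os.path.dirname(b))

    # First app "installs" its copy of the shared runtime file.
    with open(a, "wb") as f:
        f.write(b"\0" * 1024 * 1024)

    # Second app gets a hard link instead of a second copy.
    os.link(a, b)

    sa, sb = os.stat(a), os.stat(b)
    print(sa.st_ino == sb.st_ino)  # True: same inode, same blocks on disk
    print(sa.st_nlink)             # 2: two names, one copy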

But fundamentally the issue here is one of philosophy. Chrome wants to rule the world and has a budget to match, yet their approach to platform design just does not scale. For as long as that is the case there will be lots of complaining about bloat.

[1] https://hydraulic.dev/ (disclosure: my company)



>The Chrome team is ideologically wedded to the idea that everything should be a web app

I'm not surprised, and doubt the Firefox team is any different in that regard.

So far the Chrome team hasn't removed from Chrome the ability to go to a new web page in response to code external to Chrome, so we have that to be thankful for at least. Yay?

(The desktop code I maintain achieves that effect by invoking /opt/google/chrome/google-chrome with the desired URL as an argument.)
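
For reference, that effect boils down to something like the sketch below (assuming a Linux desktop with Chrome installed at the path mentioned above; the URL is just a placeholder):

    import subprocess

    # Invoke the installed Chrome binary with the desired URL as its argument.
    # If a Chrome instance is already running, it typically opens the page
    # there; otherwise a new instance is started.
    subprocess.run(["/opt/google/chrome/google-chrome", "https://example.com"],
                   check=False)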



The comment I wrote at a higher level in this same thread is, ultimately, a criticism of NOT using a statically compiled, statically linked language (because a form of tree shaking, like the original commenter suggested, is already part of the linking step there, except for DLLs).

And yet, full disclosure and admitting to cognitive dissonance: for a (hobbyist) C++ game engine I'm currently working on that targets Emscripten for the web build and native for the debug build, I'm considering not even having a native release build at all.

The idea being that if it's web-first and the native build is only for developer use (debugging), I could do things like supporting only one desktop graphics API (e.g. just DirectX) and optimizing the native graphics pipeline for simplicity over performance. End users would/could just use the web version.

Granted this is a bit different because I wouldn't be distributing a browser too a la Electron; it would just use the browser the end user already has. Just thought it's interesting that it's easy for me to criticize others, but with the choice of how to spend my own limited developer time (my free time) it's looking like this way of doing it makes the most sense.



I kind of at least partly agree with Chrome. My problem with it is that web apps are missing a lot of features you would want in a non-cloud environment for offline use.

But if something is proprietary and cloud linked anyway I would much rather it all go through the web. That way open platforms still have a chance.

If banking apps and Google payments and proprietary IoT device controllers were all on the web, then a Linux phone might actually be viable!



How would it be a "Linux phone" if the only software you could run was web apps? That which defines an OS is its APIs and unique capabilities, anything can act as a bootloader for Chrome.


Unless they actually convince Linux to block non web apps, you could run whatever you want on a FOSS platform, you'd just be able to run proprietary web stuff in addition to native free software.

I'm guessing eventually the native free software would move more and more into the browser too, but that's fine as long as you can still run the stuff that hasn't been moved yet or isn't interested in moving.



This is an important distinction between statically compiled and interpreted languages which is often lost in discussions that focus on developer usability.

For many years I worked on my own game engine which combined a C++ core with an interpreted scripting language. I had developed a system of language bindings which allowed the interpreted language portion to call functions from C++. The engine quickly grew to a multiple-gigabyte executable (in the debug build), and no matter how much I tried to optimize the size it was still unconscionably huge.

One of the reasons I eventually gave up on the project was I realized I was overlooking a simple mathematical truth. The size was NxM, where N is the number of bindings and M the size of each binding. I was focusing on optimizing M, the size each binding added to the executable, while not just ignoring N but actually increasing it every time I added bindings for a new library I wanted to call from the game engine.

There were diminishing returns to how much I could improve M because I was relying on compiler implementations of certain things I was doing (and I was using then-new next generation C++ features that weren't well optimized); it would be a lot easier to simply reduce N. And the easiest way to do that would be some sort of tree shaking.

Unfortunately due to the nature of interpreted code it isn't known at compile time which things will/will not be ultimately called. That determination is a runtime thing, by calls via function pointer, by interpretation of scripts that start out as strings at compile time (or even strings entered by the user during runtime).

From a compile time perspective, static usage of every bound function, feature or library already exists - it is the C++ side of the cross-language binding. That's enough to convince the linker to keep it and not discard it.

In fact, the mere presence of the bindings caused my game executable to grow more per included library than would a similar C++-only, all-statically-linked program. If a library provided 5 overloads of a function to do a similar thing with different arguments, an all-C or C++ application that uses only one of them would need only include that version in the compiled executable; the others would be stripped out during the linking step.

Since I don't necessarily know ahead of time which overload(s) I'm going to end up using from the interpreted language side of the engine, I would end up binding all 5. Then my executable grew simply from adding the capability to use the library, whether or not I make use of it, but moreover if I did use it my executable grew even more than an equivalent C/C++ - only user of the library because I also incur costs for all the unused overloads.

You can see why something like Electron would have the same problem. Unused functions can't be automatically stripped out because that information isn't known at compile time. To do it by static analysis the developer of the Electron app would have to re-run the entire build from source process of the Electron executable to combine that with static analysis of the app's Javascript to inform the linker what can be stripped out of the final executable.

And it bears mentioning neither such a static analysis tool for Electron app Javascript nor the compiler/linker integrations for it currently exist. In theory they could exist but would still have trouble with things like eval'd code.

Manual configuration would be possible but necessarily either coarse-grained or too tedious to expect most developers (of Electron itself or users of Electron) to go into that much detail. That is, you may have manual configuration to include or not include Xbox 360 controller, but probably not for "only uses the motion controls" while not including other controller features.

Either way, you wouldn't be able to add back support if JavaScript written after build time turned out to actually need the function or feature after all, unless you distributed a new executable. If you're building so much from source with configuration and static analysis, at that point why not write your whole application in a statically compiled language in the first place?

My thesis here is not that we should accept things like Electron being bloated because they cannot be any other way. My point is (as happens time and again in Computer Science) we had certain things already (like tree shaking and unused symbol stripping during the linking stage of statically compiled languages) and then in the name of "progress" let them either be Jedi-mind-tricked away or the people developing the new thing didn't understand what was being left behind.



I'm not familiar with Chrome's architecture enough to say, but I would be surprised if all these capabilities get paged in and initialized. There definitely has to be some work to setup the web platform bindings, to let JS think this stuff is here ready to go, but I hope it's backed by late bound code.

And in the web browser, I believe v8 is using snapshots of the js runtime to avoid much of the work of initializing the js side of things: it just forks a new copy of itself.

This is one of the prime strengths of alternatives like Tauri, which use a shared library model rather than a statically linked one. With Electron you have a ton of initialization to do for a very excellent runtime, but then you never get to reap that existing work again. Whereas on the web, we open new pages and tabs all the time, avoid the slow first load, and get to enjoy the very fast later instance loads. That multi-machine capable VM pays off! With Tauri, the shared library may well already be in memory and initialized. Only the first consumer of the shared library has to pay the price.



> Modern cars work, let’s say for the sake of argument, at 98% of what’s physically possible with the current engine design

Yeah, but I'll bet those car designers didn't close the requisite number of story points in their sprint! So really, who's laughing now?



Don't worry about the cars, we've got them covered. Just add some software and suddenly they stop working as well.


They're mostly independent components, controlled by software, talking over a shared bus to other components. It's already bad and has been for some time.


There is a remake of the classic puzzle game Supaplex. It weighs 200MB. The original is less than 300kB (https://www.dosgamesarchive.com/download/supaplex).

So, I play the original one in DOSBox-x. Maybe it is nostalgia, but I love vibrant pixels and the Sound Blaster music.



> I have a Python program I run every day, it takes 1.5 seconds. I spent six hours re-writing it in rust, now it takes 0.06 seconds. That efficiency improvement means I'll make my time back in 41 years, 24 days :-)

The key here is "I run every day" - if you're only saving CPU for yourself once a day then, well, fine. But if the improvement is about something that runs millions of times a day, or is run once a day by a million people, then that's something completely different!
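
Back-of-the-envelope, using the numbers quoted above (a sketch of the arithmetic, not a benchmark):

    saved_per_run = 1.5 - 0.06   # 1.44 seconds saved each run
    rewrite_cost = 6 * 60 * 60   # six hours of developer time, in seconds

    # One person, one run per day: payback measured in decades.
    print(rewrite_cost / saved_per_run / 365)                     # ~41 years

    # A million runs per day (many users, or a hot path): payback the same day.
    print(rewrite_cost / (saved_per_run * 1_000_000) * 24 * 60)   # ~22 minutes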



I'm so tired of these rants. They're always very unoriginal and cover the same points of waaaah software is slow, software is bloated. And they never offer solutions or analysis beyond "(almost) everything sucks".

Y'know why software sucks? People. People are hard to manage, hard to incentivize, hard to compensate. Focusing on the slow loading web page is neglecting the economic incentives to bloat the bundle with trackers, to ignore perf work in favor of features, to not compensate open source developers.

The point about engineers in other domains doing it better, well I'm not exactly convinced about that (look at the bloated cost of building in America). But taken as true, they're doing better because there are economic, legal, and social incentives to do better. Not because they're just better at engineering.



I think there's also some big conceptual distinctions between making software and the other engineering practices it gets compared to in these articles.

If you're making a building or a bridge, it may have some aesthetic or small functional qualities that make it unique, but 99% of the job is to make it with the same success criteria as every other building or bridge. The building needs to stand up, hold people, have X amount of floors, and not fall over. A bridge has to get people/vehicles over it and not crumble.

Pretty much every piece of software is expected to do something new and innovative. It starts off with an abstract idea, and details are filled in as you go. You stumble into some roadblock because of a design conflict that wasn't obvious until you were implementing. If the software is being made at the request of a client or your company, they probably gave a bunch of success criteria containing inherently contradictory ideas that they'll need to compromise on, because users don't actually know what they want until they're using it. You finish off the first version, they identify all the things that aren't working, and now there are new success criteria you need to implement, built off a foundation that wasn't prepared for it. No architect ever finished erecting a skyscraper only to be told it needs a major change that will involve reworking all the plumbing.

That's the inherent difference. Software is a game of constantly trying to chase a moving target, and most of it is being built on a stack that is trying to chase moving targets at every level.


