(Comments)

Original link: https://news.ycombinator.com/item?id=38792446

This article discusses the preference for traditional Unix tools such as Vim and Emacs versus popular IDEs such as IntelliJ, Visual Studio, and VSCode, highlighting issues such as complexity, the lack of community and architectural support, and the limitations of large-scale distributed systems. While many people love Vim for its efficiency, extensibility, and simplicity, it requires substantial configuration for advanced use. The author suggests investing in tooling rather than spending excessive time configuring Vim, emphasizing that the focus should be on performing tasks efficiently and effectively. The article also points out that many modern IDEs offer superior integration and productivity, giving developers immediate advantages when switching to them. Ultimately, while Vim remains a viable choice for experienced users, newer IDEs have become widely popular among developers thanks to their ease of use, support for the latest languages, and more comprehensive toolkits.

Related articles

Original text
IDEs we had 30 years ago (blogsystem5.substack.com)
524 points by titaniumtown 20 hours ago | hide | past | favorite | 443 comments

This article completely ignores the Macintosh and the greatest IDE ever, which came out in the mid-80s: Coral Common Lisp. It ran on a Mac Plus with 1MB of RAM and it was fucking awesome. It included a drag-and-drop interface builder that let you build a complete app in literally a matter of minutes. Nothing I've seen has come close since.


IMO the real loss in IDE tech is the speed that Visual Basic 6 gave you for making desktop GUIs.

Web and mobile development (and I have done all three) are significantly slower than what you could do with VB6. It's really strange that we haven't gotten back to it.



VB6 was fantastic. The language was terrible, but the IDE was the best way to make GUIs that ever existed. The Windows 98 era was excellent for UX as well, with every element on the screen having an accelerator key, consistent menu guidelines, consistent visuals, keyboard shortcuts, etc.

It was just brilliant.



VB6 worked because the environment was simple. GUIs are simple when you don't need to do any styling, don't require any special modifications, and most importantly don't need responsiveness. On top of all that, you need a specific environment and operating system to run the GUI. Web front-ends are a completely different game in that regard.


Yeah. Back in the day, every application was expected to use the same common set of controls - which were well written by the OS vendor, well tested, well documented and well understood by users. Every button on Windows 95 looked and behaved the same way. Every application had application menus that worked the same way, in the same font, and they all responded to the same keyboard shortcuts.

These days every application reinvents its own controls for everything. For example, in a web browser the tab bar and address bar are totally custom controls. Electron apps like VS Code take this to the extreme - I don't think VS Code or Spotify use any native controls in their entire UI.

I blame the web in part. It was never designed as a system to build application UIs, and as such the provided UI primitives are primitive and rubbish. Developers got used to building our own custom elements for everything - which we build fresh for every client and style in alignment with the brand. UIs built on the web are inconsistent - so our users never get a chance to learn a common set of primitives.

And for some reason, OS vendors have totally dropped the ball on this stuff. Windows has several competing "official" UI libraries. Every library has a different look and feel, and all are in various stages of decomposition. Microsoft's own software teams seem as lost as everyone else when it comes to navigating the morass of options - if Windows 11 and MS Teams are anything to go by. macOS isn't quite as bad, but it's still a bit of a mess. Every year I expect the people building Xcode to know how to build and debug software on macOS. And then Xcode crashes for the 3rd time in a week. Xcode achieves the impossible of making the JavaScript ecosystem seem sane and attractive.

I'd love a return to the halcyon days of vb6, where UIs were simple and consistent. Where there was a human interface guidelines document that UI developers were expected to read. I want users and developers alike to know how the platform works and what the rules and conventions are. F it. How hard can that really be to build?



I think you're forgetting how much more dense and complex in controls even a basic web UI is. Every HN comment in the default story view has something like up to 10 clickable controls - a pair of vote buttons, username, timestamp, a bunch of nav links, a hide control, a reply button. HN - the famously minimalist, oldskool website. The web made a new kind of UI, and even the back-in-the-day version of that UI is way more complicated than the back-in-the-day desktop app UI you're thinking of.


Hm. I don't think it would be that hard to remake HN's UI using an old-school UI component library. The up/down arrows could use the up/down buttons in this screenshot from the classic Mac OS color picker:

https://guidebookgallery.org/pics/gui/interface/dialogs/colo...

... But limited to only allow you up/downvote between -1 and 1.

Everything else could be done with buttons and a TextField / Label for comments and replying.

The web is a bit weird in that it taught us to build every UI as a giant scrolling document. And HN is no different. A more "classic" UI approach would be something like Thunderbird mail - with 2 panes: a "comments" tree on top and a "message body" down the bottom. That would be harder to read (since you'd need to click on the next message to jump to it). But it might encourage longer, more thoughtful replies.

Thunderbird: https://lwn.net/Articles/91536/

Or you could reimplement HN with classic controls and something like TB 114's UI:

https://www.ghacks.net/wp-content/uploads/2022/08/account_ma...

Probably still worse than what HN is now, though.



> I think you're forgetting how much more dense and complex even a basic web ui is in controls.

It really isn't. Web is anemic in controls and layouts compared to what's actually possible with proper controls and control over those controls.



>and most importantly you don't need responsiveness

I'm gonna ask a dumb question out of ignorance because I know responsiveness is all the rage, but... what do we gain from it? Would it not be more straightforward to build UIs from the ground up for desktop and mobile targets than to make one UI morph to fit both?



There are many different screen resolutions. Being able to adjust the application based on available space makes the application usable to more people.


I think Delphi was slightly better. The components in Delphi were more powerful at the time. But the general idea is the same.

C# with WinForms is still usable today and provides a similar experience. Although the design language has fallen out of fashion.



I agree on all points. In Delphi designers and properties felt more logical and consistent.

For me, WinForms always had an element of struggle not present in Delphi.



The GUI WinForms editor in Visual Studio 2022 is a direct descendant of the one in VB6 and has all the exact same functions that I remember from VB6.


was the language even that bad?


It was a running joke how bad it was. It was a bit before my time, but even when I saw people use it as a kid, and playing with it myself, it was pretty obvious how fast you could make things with it.


IIRC, there were no user-defined types. You had to write the whole app using primitives.


You could create classes in VB6 from the IDE: you would add a class module and define code in that module. There was no "class" keyword AFAIK.


There was definitely a `Type` keyword that was similar to a struct.


VB6 was the only “low code / no code” tool that actually delivered on its promise.

Here's Bill Gates demoing it 32 years ago.

https://youtu.be/Fh_UDQnboRw?feature=shared



Delphi was even better. In fact, having used both, I hated vb6.


Microsoft Access was way ahead, in my opinion - especially since it came with a ready database, reports, etc., and with VB6 you could practically do anything you wanted with the operating system. It's interesting that we have thrown it all away for something far inferior.


I spent a long time doing VB6 and Windows Forms; the idea that it was meaningfully better than NeXT or Delphi is just wrong.


Yes, Delphi too. So also tools like Powerbuilder for developing database-heavy apps. There is nothing even remotely close to those tools now.


You're totally wrong.

The IDEs I was making in VB6 in the 90s I'm making about twice as quick in Visual Studio 2022 in C# with WinForms.

In fact, quicker because I'm getting GPT to write any gnarly code I need -- like I just asked it if it was possible to add a tooltip to a ListBox and it churned out exactly the code I need, something I would have spent a bunch of time figuring out on my own.



In Delphi you didn't need any code to add a tooltip to a component - just set it as a property in the inspector. You can do it in code if you want, but it's easy either way since you see the available properties in the inspector.


Never really got why Microsoft didn't keep going with that kind of thing. Why don't we have .net properly inside excel instead of VBA?

Money? Internal elitism?



Office Developer Tools exist.

They're trying to steer people away from end-user programming towards using more well-defined, tested, features. I find it patronizing, but, having seen some VBA, understand the reasoning.



Or even better one: Delphi (if you were into pascal)


I’ve often heard people mention that Delphi was a superior RAD GUI experience than Visual Basic, but as someone who’s never used it, what is it that made it so great compared to VB or other GUI builder type tools (eg Qt Designer)?


The Pascal language requires things to be declared in a certain order. It's a bit awkward some of the time, but it enables the compiler to work in a single pass. This meant that compiling and running an application was extremely fast by any standard, and this really made it stand out compared to other development tools out there.
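The single-pass idea can be sketched with a toy checker (in Python, purely illustrative - real Pascal compilation is far more involved): because every identifier must be declared before it is used, one top-to-bottom scan is enough, with no second pass needed to resolve forward references.

```python
# Toy single-pass front end: identifiers must be declared before use,
# so one linear scan suffices - no second pass to resolve references.
# The statement format here is invented for illustration.

def check_single_pass(statements):
    """Each statement is ('decl', name) or ('use', name).
    Returns errors found in a single top-to-bottom scan."""
    declared = set()
    errors = []
    for kind, name in statements:
        if kind == 'decl':
            declared.add(name)
        elif kind == 'use' and name not in declared:
            errors.append(f"{name} used before declaration")
    return errors

program_ok = [('decl', 'x'), ('use', 'x')]
program_bad = [('use', 'y'), ('decl', 'y')]  # forward reference: rejected
```

A compiler that allowed forward references would need to remember unresolved uses and revisit them later; requiring declaration-before-use removes that bookkeeping entirely.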

VB created applications that had to ship with a shared runtime library. Windows wasn't great at versioning these libraries so developers often shipped their own VB runtime with their executable. The executable was small and the runtime was comparatively huge which had a negative impact on user perception when downloading the installers.

Before moving on to Microsoft in 1996, Anders Hejlsberg was the Chief Engineer at Borland who oversaw the development and release of Delphi 1.0.

For years, VB felt like an application that could make deployable versions of itself. Delphi felt like a programming environment that compiled code into applications.

After Hejlsberg moved to MS, a lot of improvements were made in VB that ultimately made Delphi less attractive, especially during Borland's strategic waffling known as "The Inprise Years": https://en.wikipedia.org/wiki/Borland#Inprise_Corporation_Er...

If you want to get a feel for what it was like then check out the FOSS clone "Lazarus".



> After Hejlsberg moved to MS, a lot of improvements were made in VB that ultimately made Delphi less attractive

Well, not actually. With Anders' move to Microsoft, VB6 (aka, VB "classic") was discontinued. Microsoft supported Visual Basic syntax on the .NET runtime, but the vast majority of VB programmers considered this to be a different language because developing for the .NET Framework (remember this is ~2001) was a huge departure from VB Classic.

Many VB developers petitioned Microsoft to open-source VB6 or continue releasing improvements on it. Microsoft did not and chose to continue with their .NET + C# strategy.



VB6 was my last VB. It came at just the time when getting a computer onto a network to download and install a giant distribution was touch and go. The building where I worked didn't have networking in the labs, the "labs" were whatever space we could find, and the "lab computers" were old cast-offs retired from office use. It meant that supporting VB.NET with a brand spankin new networked computer wasn't a safe assumption.

At the same time, I was playing with Linux at home, and wanted tooling that could run on either platform. I learned Python at home, and then made the switch at work.

One of my last VB6 apps, thousands of lines, has been running in the plant without issue for 15 years. On one occasion I had to bump up the declared sizes of some fixed length arrays.

As for GUIs, I never found anything close to VB, but I also decided to just write a thin wrapper around Tkinter and let my layout be generated automatically by default. I haven't missed laying out my GUIs, which were always a hodgepodge anyway.
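The "thin wrapper that lays things out automatically" idea could look something like this in Python with Tkinter. Everything here (`auto_layout`, `build`, the widget spec format) is a hypothetical sketch, not the commenter's actual code:

```python
# Sketch of a thin Tkinter wrapper that auto-generates a grid layout
# from a declarative widget spec, so widgets are never hand-placed.
# All names here are made up for illustration.
import tkinter as tk
from tkinter import ttk

def auto_layout(spec, columns=2):
    """Assign each widget in the flat spec a (row, column) grid slot."""
    return {name: divmod(i, columns) for i, (name, _) in enumerate(spec)}

def build(root, spec, columns=2):
    """Instantiate the widgets and grid them (requires a display)."""
    positions = auto_layout(spec, columns)
    for name, kind in spec:
        maker = ttk.Button if kind == 'button' else ttk.Label
        row, col = positions[name]
        maker(root, text=name).grid(row=row, column=col, padx=4, pady=4)

form = [('Name', 'label'), ('Save', 'button'),
        ('Load', 'button'), ('Quit', 'button')]
# build(tk.Tk(), form) would render the form in a window.
```

The trade-off is exactly what the comment describes: you give up pixel-perfect placement, but a default layout is generated for free from the spec.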



There's a bit of a gap there though between 1996 and Visual Basic classic being discontinued. VB.NET came out in 2002 but VB6 was supported until 2008.

VB5 in 1997 and VB6 in 1998 really closed the gap with Delphi from what I remember.



> After Hejlsberg moved to MS, a lot of improvements were made in VB that ultimately made Delphi less attractive, especially during Borland's strategic waffling known as "The Inprise Years": https://en.wikipedia.org/wiki/Borland#Inprise_Corporation_Er...

This is (from Borland's telling) not an accident. https://news.ycombinator.com/item?id=29513242

Hard to not waffle when your major competitor hires away your top talent.



Having worked with both Delphi and Visual Basic, I've found that Delphi had the edge, especially for professional apps. Its use of Object Pascal meant you got compiled, efficient code right out of the box, and it didn't need an extra runtime like VB.

The VCL component library in Delphi is still unmatched in native code - very comprehensive built-in (and commercial) components, and customizable. Plus, Delphi's database connectivity was unparalleled and could be set up in the designer, where you could see data queried and returned live in your grid component!

Delphi also supported advanced language capabilities (when compared to VB), like inline assembly and pointers, which were essential for low-level optimization or system hacking. The robust error handling in Delphi was another plus compared to VB's older 'On Error' style.



A few things made Delphi great. For one thing, you could develop all sorts of apps in Delphi - from system management apps, to database-heavy apps, to editors, to games, etc.

1. The component architecture was just great - I have yet to see any language/platform with a more sophisticated component structure even now.

2. There were all sorts of free and commercial components that you could download and install into the IDE and have it running in your application in next to no time. ActiveX etc didn't even come close to the level of integration and speed that the Delphi component architecture provided.

3. Components could be loaded into the UI at design time itself, and you could see them functioning directly within the IDE when you placed them on your form. Example: you could connect to a database, run a query, put that into a table with all sorts of sorting and navigation functions, and see it all working even without compiling the application.

4. It was trivial to develop components for Delphi - even though it was so sophisticated. You could develop one in about 15-30 minutes.

5. The compilation speed was just insane. It just blew everything else out of the water and so you could do fast compile+run cycles.



I wrote a neural net ML app in the 90s using Delphi.

VB was easier to start but Delphi was nearly as fast to build, code was compiled (fast!), Object Pascal supported OOP and the UI features like visual form inheritance I have yet to see implemented anywhere else.



I would really expect there to be some FOSS Python or JS insta-app-maker that's as easy as VB was, but for some reason nobody wants to work on such things.


Have a look at https://anvil.works - it's a drag-and-drop web app creator using Python for both the frontend and backend. It's as close to Delphi for the modern age as I've come across.


I am working on such a thing myself at https://github.com/yazz/yazz. Also, there are many other people trying to build something similar.


There is Gambas: https://en.m.wikipedia.org/wiki/Gambas

It's more that few people want to make and distribute small desktop apps anymore.



That's 20 years old. People like native apps, it's just that GUI programming is generally such a pain that many people avoid it.


Didn't Microsoft more or less just copy everything from Borland Delphi? They had drag and drop GUI stuff in the early 90s.


Windows Forms is pretty much the successor to VB6, and I think it's still supported. Still nothing like it on Linux though, even though people say Linux is way better than Windows for software development :/


The subset of GUI software is much much smaller than the greater whole.

I do agree though that any GUI programming on Linux is a pain compared to Windows / Mac. However, the proliferation of web-based apps, even those running in Electron, shows how valuable an easy-to-use, truly cross-platform GUI framework would be.

Google is trying with Flutter and Dart, but last time I used it I felt it was still being iterated on far too quickly; maybe in a bit (maybe even now) it will be friendlier to use.



what aspect of the development speed do you feel is faster? i feel like i can write things like

    
faster in html than i could in vb6. maybe i'm wrong about that?

you can try it in your url bar: data:text/html,

of course that doesn't give you database integration, but if you just want crud, you can get crud pretty quick out of django's admin

here's a couple of things i've hacked up recently in dhtml which i think wouldn't have been easier in vb6

http://canonical.org/~kragen/sw/dev3/ifs a 2-d iterated function system editor

http://canonical.org/~kragen/sw/dev3/clock a watchlighting clock



Now try to make a layout with that form example that you could free-form place anywhere in 2D space and have it flex properly as the window size changes, beyond the defaults that HTML gives you, and then make the generic B2B SaaS dashboard like Segment, RevenueCat, Mixpanel, Datadog, Sentry, etc. I bet you could make a VB6 / Pascal equivalent much faster than you would be able to with a mobile or web app, especially if they were updated with a decent graph widget set.

Also, your two examples are canvas-drawing examples; that's a pretty different target from what Delphi / VB6 aim at with their GUI toolkits.



FYI, since 2017-ish doing layouts in HTML is much easier if you use "display: grid". Just be aware that the numbering is based on lines, not boxes. Also be aware that to use percentage-based heights at top level, you have to style the `html` and `body` too.

Additionally, use of `@container` queries (the modern, poorly-documented alternative to `@media` queries) lets you do more advanced layout changes on element resize. This requires adding `container-type: size` style on the parent (I found this very confusing in the docs).



i'm struggling under the misconception that having things flex properly as the window resizes is the default in html and basically impossible in vb6. but i'm very open to having my misconceptions corrected. is there a public video that demonstrates what it looks like when an expert uses vb6, so i can see which things that are hard in html are easy in vb6?

i have no idea what segment, revenuecat, mixpanel, datadog, sentry, etc., are



Flash and Actionscript did this for the web but then Apple killed Flash.

Maybe in the era of LLM-assisted webdev tools (like teleporthq.io or MS Sketch2Code etc) the LLM will help sidestep API moats and help bring back such low-code IDEs. Or it could backfire and bring about API moats with non-human-accessible APIs (obfuscated binary only).



Apple did not kill Flash. Flash killed Flash.

Adobe claimed that they could get Flash running on the first iPhone if Apple let them.

When Flash finally did come to mobile via Android in 2010, it required 1 GB of RAM and a 1 GHz CPU, and it still ran badly.

The first iPhone had a 400Mhz CPU and 128MB of RAM. It could barely run Safari. Were you around when the only way that Safari could keep up with your scrolling was by having a checkerboard pattern while waiting on the screen to be redrawn?



VBA is still used all the time. As long as you have Excel or Outlook you can use it.


Would this be similar to the .NET Webforms? That could be done in C# or VB.


The equivalent would be WinForms


I think by overall value it pales in comparison to Delphi. At least this is my experience.


I loved TurboPascal. I agree with everything the post argues.

I would like to expand a bit on it being before the Internet was a huge thing.

The manuals that came with TurboPascal were nearly excellent. It included most of what you would need to get started. When you didn't quite understand something, you had to spend time to figure it out and how to do it well. This could be time consuming, but afterwards you gained important knowledge.

Then there were other books to get, and there were "coding" magazines, though at the moment I can't remember any TurboPascal-specific ones. And if you were lucky you knew one or two other people who were into coding, and you could share problems, solutions, hints and tips. And warez.

There were also a lot of BBSs out there. Where you could ask questions, help others, etc.

These days, when most people face a problem, they Google it (maybe now ChatGPT it), find someone who posted a solution, cut and paste the crappy code, cross their fingers that it works, and off they go.

(or pull down libraries without having any idea what in the world it actually does)

At the same time, things have gotten a lot more complex. In my TurboPascal days I knew most of the stack: the programming language, the operating system, assembler, a lot of how the CPU worked.

These days there's understanding JavaScript, understanding the runtime / compiler, etc., before you even get close to the underlying OS - and you certainly never get down to the assembler and CPU.



I'm not sure whether the issue is stack complexity / depth, but there is definitely something in the culture where it's common for docs and how-tos and Q/A to tell you what to do, the steps/commands/etc, but to do little to help you build any kind of relevant domain model.

This isn't exactly new, there's always been documentation like this, but I think the proportions have changed dramatically since even the early 2000s.

One of my vaguely defined theories is that as valuable as automation is, the culture of automation / process has started to diffuse and act as an influence on how we think, in steps & commands rather than models.

Possible that we've always been steps-results / input-output beings, but I wonder.



I recently created some documentation for a process at work where I explained the 'why' for most of the steps and commands. Not in great detail, but just a bit of detail. I thought that it was good as a step-by-step recipe, while also giving context that could help someone in the event that something didn't go according to plan.

I was asked to remove much of the context that I provided, so as not to confuse the reader, and to make it as direct as possible. This is documentation intended for experienced, technical professionals. I think that the revised documentation is less helpful.



Perhaps it could be restructured to separate out the howto from the explanation to serve the reader’s intended use at the time as described here: https://diataxis.fr


As a tech writer I love the concepts of Diataxis but don't agree with it being invoked here. Context is critical in all four of its quadrants, and its model doesn't apply uniformly to every aspect of every application.

GP did IMO the right thing by understanding the audience first in order to judge what level of context is appropriate. That should be rule 0 before anything in Diataxis gets involved.



The article covers some good points, but misses a few extra things that the Turbo Pascal 7.0 IDE included that made it a true powerhouse:

- A full OOP tree to determine parents/traits of descendant objects

- The ability to edit, assemble, and trace through both inline and external assembler code

- A registers window that showed all registers and flags at any stage of runtime

...all while able to run on the first 4.77 MHz 8088 IBM PC, which was long in the tooth by the time TP 7.0 came out. (The OOP tree required a 286 as they only added it to the protected mode IDE.) This made the TP 7.0 IDE a complete development and debugging environment for assembler as well as Pascal.



I never tested it on an XT, but it ran like a dream on my 286. I wouldn't be where I am now without Turbo C++/Turbo Assembler.


> ...all while able to run on the first 4.77 MHz 8088 IBM PC

Eh, more like it walked rather than ran :-P.



Twenty-nine years ago, Metrowerks CodeWarrior was released: https://en.wikipedia.org/wiki/Metrowerks

I had the shirt ( https://www.rustyzipper.com/shop.cfm?viewpartnum=282427-M558... ) and wore it for many years... wish I knew where it was (if it's still in one of my boxes somewhere).

The IDE was nowhere near as clunky as a text-only DOS screen. https://www.macintoshrepository.org/577-codewarrior-pro-6



The alternative was MPW which was awful! Long Live Code Warrior! Its debugging was probably a decade ahead of its time.


The entire System 7 UI was really a thing of beauty.


> there are a few things that VSCode doesn’t give us.

> The first is that a TUI IDE is excellent for work on remote machines—even better than VSCode. You can SSH into any machine with ease and launch the IDE. Combine it with tmux and you get “full” multitasking.

I definitely disagree with this sentiment. At my last job, I had to do most of my work on a remote server (because it had a GPU), and I found VS Code far more pleasant to use than plain old SSH. People recommended using an editor on the server side or messing around with WinSCP / Cyberduck, but VS Code was just so much better in so many ways.

Because of VS Code's web roots, it can easily run its frontend on your own local computer while running its backend somewhere else. This means that most common actions, like moving the cursor or selecting text, can be done locally, without the need for a round trip to the server. The operations that do have to be executed remotely, like saving a file for example, are properly asynchronous and don't interrupt your workflow. Everything is just far snappier, even if you're working from home, through a VPN, on barely working WiFi and an ADSL line.

As a bonus, you get fully native behavior and keyboard shortcuts that you'd expect from your platform. Things like text selection and copying just work, even some of your addons are carried over.



100% agree. Remote VSCode over SSH is great.

The resource consumption on the client doesn't bother me one bit. Any minimally decent laptop can put up with that load, on battery power, for hours.

I would agree with “whatever it takes to make the server install leaner, more portable, etc” just without sacrificing many features.

If the server side doesn't run on FreeBSD that's really too bad. If Microsoft makes it hard to improve by not making those bits open source, that's very unfortunate.



VS Code remote in some cases is better than local.

As the remote can be a Docker container: when I have to do some experiment, I create a container, which takes 5 min to set up. I can then play around, test a dozen packages and configs, and once I'm comfortable, commit the last version.

If I want to do some quick testing on a project by a different team, again a local container is set up in 2-10 mins. Once done, I delete the container and my local system isn't messed up.

Last is the obvious use case of testing anything on reasonably large data or GPUs. Create a cloud server, get the data, run your code and tests. Push the data to S3 and done.



vscode's model of running a server on the host is good because of low latency.

It can be a bit heavy in cpu usage depending on plugins though.

I like emacs tramp in theory since it doesn't impose that, but latency suffers.

With correct ssh config it usually works well, but many times I'd prefer lower latency with emacs being on the host.

That's supposedly possible, but I've never gotten it working.



What were you trying to do with tramp? I’ve used it for coding Common Lisp, together with a remote SLIME session - ie slime-connect - and while I have run into at least 1 limitation with paths, I have a decent enough work around for it. I think the setup was just a matter of setting some customizable variables.


I typically use tramp for:

- docker containers
- accessing boxes on the same network

Sometimes it's fine, but then, perhaps because of regressions, I get buffers that never seem to recover and have to be cleaned up.



I see. I thought I had some .emacs customized settings I could share, but they're all slime specific. It appears tramp otherwise just works without further configuration - unless I set them in ielm and forgot about them before copying them over to .emacs, but I didn't see anything like that in my ielm history.


I wish I could find a decent way to make VSCode work properly on Android.


I was doing exactly the same 30 years ago with X Windows and XEmacs.


> This means that most common actions, like moving the cursor or selecting text, can be done locally, without the need for a round trip to the server

No, you weren't doing this. You were making a round trip to the server when you moved the cursor or selected text.



> You were making a round trip to the server when you moved the cursor or selected text.

Of course this being X, your machine ran the server and the remotes were the clients…



No, as gummy well put it, all of that was done on the client computer.


The fact that it is easy to confuse the server with the client in X does not change the fact that the X server and XEmacs are running on different computers, so each interaction is a round trip.


XServer and XEmacs are both running on the client machine.

Also, by the laws of physics, with distributed computing there is no way to avoid each keypress and its display on a rendering surface being a two-way street.



By the "client machine" where XServer and XEmacs are both running, do you mean the machine where the human user is entering keypresses and viewing windows? Or do you mean the machine where the files are ultimately getting edited? Clearly, there has to be something running on each of the machines, since otherwise one side would have nothing to connect to on the other side. What is running on the machine opposite the "client machine"?

The idea with VS Code is that neither the keypresses nor the displayed windows are being sent over the network, but are kept within the same machine where the user is entering or viewing them. Only the file data (or debugger status, etc.), which are cached and far less frequently updated, are sent over the network. Are you saying that XEmacs can also function remotely in this way, with neither keypresses nor displayed windows sent over the network?



There’s some confusion in some of the replies here. The point this person is trying to make is that you get the remote machine’s key bindings, not the local’s. That’s an artifact of the experience being a remote desktop.


You still had to do a roundtrip for every single click though, right? I don't think X Windows has any kind of client side scripting system.

That's better than SSH for sure, but still not as good as the web model.



X Windows server runs on the client machine.

The client is the server application.



The point still stands, though. You need a roundtrip, even if it starts from the X server rather than the X client.


You always need some level of round trip between the keyboard and UNIX processes.

The server application isn't guessing keys, regardless of the connection format.

What matters is how the communication is being compressed and local optimizations.



The difference here is that VisualStudio code fully runs the GUI on the local machine and only file IO or external programs (compiler, the actual program being devleoped, ...) run remotely. Thus the UI reacts promptly to all interactions and many of the remote interactions happen asynchronously, thus even saving a file will not block further actions.

Whereas any non-trivial X application does its work in the X client (the remote application), so even basic interactions have a notable delay, depending on the connection.



It shows you never used slow telnet sessions over modems.

There is no difference between doing this over text or graphics, in terms of the whole setup regarding network communications for data input and output.



Again: the key difference is that in VS Code the UI runs locally, thus all UI interactions are "immediate" and there is no difference between local and remote operation. Yes, IO has latency, but where possible that is hidden by the UI (possible: saving a file happens without blocking the UI; not possible: loading a file requires the file to be loaded... but even then the UI can already prepare the window layout).

This is very different from a system where each keystroke and each menu action has to be transferred first, before the remote side can identify the needed UI update and send that back.



Again: learn UNIX distributed computing architecture.

Not going to waste more of my time explaining this.



Telnet is a much lower-level protocol. Please learn what you are talking about and have a good day.


Pjmlp is right. You need to read up on how X was designed for remote work.


Johannes's point was, I believe, that using VSCode remotely works fundamentally different than using apps remotely via X. I don't think he is confused about how X was designed.


Designed badly, in this case.

Arguments to authority aren't appealing. Arguments from logic are. The fact is that X and VSCode's remote protocols are designed very differently, and in high-latency and high-jitter connections (and many low-bandwidth ones), VSCode's protocol is simply better.



VS Code isn't doing this with text or graphics, though. In X terms, it's running both the client and server on your local machine. It simply doesn't put the network boundary in the same place as an X application.

VS Code's "backend" that runs on the remote machine is rather only in charge of more "asynchronous" operations that aren't part of the UI's critical path, like saving files or building the project. It doesn't speak anything as granular as the X protocol.



Classic UNIX program architecture in distributed systems; apparently some knowledge is lacking here.

Long gone are the days of using pizza boxes for development, it seems.



The comparison you made wasn't to arbitrary distributed UNIX programs, though. It was to X applications, which don't work this way.


I'm sorry to say I'm as confused as I was before I read these sentences.

Let me try to rephrase: with X Windows, the UI server runs on your local machine, while the UI client runs on the remote machine (e.g. your application's server). Is that correct?



No, the whole UI runs on the client machine, which in X Windows nomenclature is the server.

The client application (on X Windows nomenclature), runs on the remote server and is headless.

Instead of sending streams of bytes to render text, it sends streams of encoded X Windows commands to draw the UI.

Everything else regarding compilers, subprocesses and what have you keeps running on the server, regardless how the connection is made.

Think big X Windows terminals or green/amber phosphor terminals accessing the single UNIX server used by the complete university department.



I'm surprised pjmlp is missing the point here. Or maybe I am.

> Instead of sending streams of bytes to render text, it sends streams of encoded X Windows commands to draw the UI.

(Simplified) VSCode is sending no bytes to a server when you're editing a file. The entire file exists on the client; you can edit all you want and everything stays on the client. Only when you pick "save" is any data sent to the server.

My understanding with X Windows is as you mentioned above: you press a key, that key is sent to an app on another machine, and that other machine sends back rendering commands. Correct? Versus VSCode: you press a key, nothing is sent remotely.

Note: There's more to VSCode. While it doesn't have to send keystrokes and it is effectively editing the file locally (so it's fast), it does send changes asynchronously to the remote machine to run things like the Language Server Protocol stuff, asynchronously sending the results back. But you don't have to wait for that info to continue to edit.
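To make the contrast concrete, here is a toy Python model of the two architectures. The numbers and function names are purely illustrative assumptions, not measurements of either system:

```python
# Toy model: time spent waiting on the network when typing 200 characters
# and saving once, under two remote-editing architectures. RTT_MS is an
# assumed round-trip time; real values vary wildly.

RTT_MS = 150

def x11_style_wait(keystrokes: int, saves: int) -> int:
    # Every keystroke travels to the remote app, which replies with
    # drawing commands: one round trip per key, plus one per save.
    return (keystrokes + saves) * RTT_MS

def vscode_style_wait(keystrokes: int, saves: int) -> int:
    # Keystrokes are handled entirely by the local UI; only saves (and
    # other file I/O) cross the network, and those happen asynchronously.
    return saves * RTT_MS

print(x11_style_wait(200, 1))    # network-bound time in ms
print(vscode_style_wait(200, 1))
```

Even this crude model shows why the per-keystroke approach degrades as latency grows, while the local-UI approach barely notices it.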



Thanks for elaborating, it helped a bit and now this section of the Wikipedia article fully clicked for me:

"""The X server is typically the provider of graphics resources and keyboard/mouse events to X clients, meaning that the X server is usually running on the computer in front of a human user, while the X client applications run anywhere on the network and communicate with the user's computer to request the rendering of graphics content and receive events from input devices including keyboards and mice."""



Even in 2023 you can get vim to be more powerful than VS Code. But it's that much more difficult.

As the author states, IDEs haven't necessarily gotten a lot better, but imo advanced features have become a lot more accessible.



What does it mean "more powerful" ? Do you mean in terms of productivity ? It probably depends on your task anyways. In 2023, it's still a pain to have decent debugging in Vim. For pure text editing, I can believe you, but for software development, I highly doubt it.


Vim is a text editor, not a code editor. It has always been fundamentally designed this way.


> Even in 2023 you can get vim to be more powerful than VS Code. But it's that much more difficult.

I absolutely agree, assuming you're using "powerful" in the same sense as saying that a Turing machine is more powerful than a MacBook.



It's similar in outcome (doing "stuff" remotely), but not the same architecturally.

VScode runs on the computer in front of you, and it _does not_ send key-presses or other user input over the network at all. Instead VScode sends file-changes over the network to the remote, and executes commands on the remote (e.g. SSH's in and runs 'gcc ...').

With X, XEmacs is not running on the computer in front of you; it's running on a computer far away. Every key-press and mouse click must be transmitted from the computer in front of you over the network, received by the remote computer, then a response sent from the remote to the computer you're interacting with, where it'll be displayed.



I use that all the time on my hobby-tinkering pseudo cloud server on an ODROID SBC. It feels like I'm literally on that specific computer directly. Plugins like Docker work as well.


I've been wanting to try something like that with neovim's remote features, but haven't found the time. Has someone attempted this? If so, how successful was it?

I've always been a big user of powerful laptops because I do like the mobility (allows me to work/browse stuff outside my home office) and I dread the pains of properly synching my files across a laptop and desktop (not only documents/projects, but also configs and whatnot).



> I definitely disagree with this sentiment. At my last job, I had to do most of my work on a remote server (because it had a GPU), and I found VS Code far more pleasant to use than plain old SSH. People recommended using an editor on the server side or freaking around with Win SCP / Cyberduck, but VS Code was just so much better in so many ways.

I'm not familiar with VS Code setup for remote editing. Does it run LSP on remote and give you full hints, errors, etc. locally?

> As a bonus, you get fully native behavior and keyboard shortcuts that you'd expect from your platform. Things like text selection and copying just work, even some of your addons are carried over.

Selecting text with Shift+ArrowKey or something like that is not a "bonus", it is just a bad text editing experience. Keyboard shortcuts are the way they are on Vim/Emacs not because their developers can't figure out how to bind Ctrl+C/Ctrl+V...



> I'm not familiar with VS Code setup for remote editing. Does it run LSP on remote and give you full hints, errors, etc. locally?

Not sure about other languages, but when I use VS Code to develop Rust remotely, it prompts me to install the rust-analyzer extension (which is my preferred LSP server for Rust) to a remote whenever I'm opening a project for the first time. VS Code is able to distinguish between extensions that need to be installed on the same machine as the code (like the LSP server) and extensions that are just making changes to the local UI.

> Selecting text with Shift+ArrowKey or something like that is not a "bonus", it is just a bad text editing experience. Keyboard shortcuts are the way they are on Vim/Emacs not because their developers can't figure out how to bind Ctrl+C/Ctrl+V...

I use an extension for vim keybindings in VS Code. When connecting to a remote host, the vim plugin still works fine, and it doesn't prompt me to install anything on the remote side, since the changes are synced to the remote host at a much higher level than that (i.e. locally mapping "dd" to "delete this line from this file" and sending that to the remote, rather than sending the remote the keystrokes "dd" and having the remote determine how to interpret them).



My understanding follows (I don’t use it but I’ve noticed the processes running on other people’s machines). Corrections welcome.

It’s split into a client (web frontend) and server that’s doing all the work. The server can be run anywhere but it’s effectively a bunch of stuff installed in a docker container. When you start an instance for a project, it creates a container for that instance with all the code folders etc bound in. LSPs are running in that container too.

It’s possible to use your own image as a base (you might have custom deps that make installing the requirements for an LSP hard, for example).

The trick they use here is that there's some base container/volume that has most of the server stuff downloaded and ready to go. Whether you start a normal instance or from a custom image, they do it the same way: by just mounting this shared volume and installing what they need to bootstrap the server.

It also appears they create a frontend window per server process too. So the master client process starts, you select a project folder, they create a new server container and a new client window connected to it. The frontend client is local while each server can be anywhere (obviously you could run the client with X if you wanted to further muddy that).



This ability also proves useful when trying to do complex package management in an isolated manner with ROS; I ultimately used a remote VS Code shell running off the robot's OS to just have my IDE recognize the many local and built dependencies that require a full ROS setup.


Using the editor on the server from a remote connection is silly. However, VSCode is not unique. On my local Emacs I use ssh via tramp [0] to browse files on the server and then edit locally. HOWEVER, I also have physical access to my server. Emacs then gives me the added benefit of being able to run in a terminal on the physical server without any window manager installed.

[0] https://www.gnu.org/software/tramp/



> Using the editor on the server from a remote connection is silly.

In my experience, this is the best way to do remote work. The alternative is to either not work with remote resources (data, hardware, etc), work locally and sync changes to remote, or work locally with a remote mounted file system (unless you need remote hardware).

For the parent, they needed GPU access, so they had to run remotely for hardware access.

I normally need particular data that is too big to move locally, so I like to work remotely for that reason. I could remotely mount drives via an SSH FUSE mount; however, the IO speed of this method can quickly become a problem. For me, it is a much better experience to either use a remote web editor (RStudio Server), VS Code remotely (which is a remote web editor over ssh), or vim. With web-based remote editors, you still draw the screen locally but get updates from remote. And more importantly, compiling and building take place remotely.

I find this method much better than either pure remote access (VNC/RDC/X11) or local-only editing with syncing code and/or data. But it very much depends on your work. When I don’t need to work with remote data, a locally managed Docker devcontainer provides a much better development experience.



In my experience, it's the worst way to do remote work. There are so many better solutions.

If TRAMP is too slow, just mount the remote filesystem locally using FUSE somehow. Use SSH to run processes on the remote system like compile and run the program. No need to run the text editor on the remote system.

You can also do it the other way around: have your remote system load your local data. I developed a small bare metal OS this way. Ran the cross compiler locally, had the output go to some NFS mount which was also available via TFTP. Booted the target system with PXE.

Running a text editor on a remote system is good for one off things and maybe as a last resort, but that's it.



> just mount the remote filesystem locally using FUSE somehow

This is the step that never works consistently for me. There is always some amount of random extra latency that makes this workflow painful. I work with some extremely large data files, so random access to these is the primary issue.

In general, the idea is that it is often better to do compute where the data already is. My experience is that you should also do the programming closer to where the data is as well. This tends to make an iterative development loop tighter.

But this is highly dependent upon what you’re doing.



That's a different thing, though. You don't edit the data in a text editor interactively, do you? I would do any interactive editing with a local editor and then fire off remote processes to operate on the data.

It's funny because my reasons against using a text editor remotely are exactly the same: to make the development loop tighter. I am very upset by latency and always try to remove it where possible. I think this is the kind of thing where we'd need to look over each other's shoulders to understand our respective workflows.



> You don't edit the data in a text editor interactively, do you?

That’s exactly what I’m doing. The code is written on the remote server. VSCode’s remote setup is actually very good at this. Mainly because, it is really a web editor that is hosted remotely and you use a local browser (Electron) to interact with it. The processing loop then happens all remotely.

But really, I’m talking more about data analysis, exploration, or visualization work. This is when I need to have good (random) access to 100’s of GB of data (genomics data, not ML). For these programs, having the full dataset present during development is very important.

If I’m working on more traditional programming projects, I can work locally and then sync, but recently I’ve been using more docker based devcontainers. These are great for setting up projects to run wherever, and even in this case, the Docker containers could be hosted remotely or locally (or more accurately in a VM).



VS Code remote has almost no visible latency, period.


Have you actually used vscode remote? If not, you should. If you have, all I can say is that I've personally used all the solutions you are mentioning, and for me vscode remote is the top, bar none, even for very large repos.


Is there an efficient way to do "Find in files" from a vim or vscode instance running locally while editing+compiling remote files via ssh? Preferably something that runs instantly for 1 GiB repos?


Haven't tried it on exceptionally large repos, but since VSCode's actual find logic runs on the server, it should work just fine. If I remember correctly, even on vscode.dev (in the browser with no server), your browser downloads the search index and then search and navigation are fast. Though it may struggle with very large repos.


I’m not sure what you mean by vscode running locally with editing via ssh. I’m fairly certain that when you do a remote connection in vscode, it literally runs the vscode program remotely and you are just connecting to a tunneled web interface. The only thing running locally is the Electron browser shell. So, remote “find in files” is running remotely, so it should be as efficient as it would be from that side.

That said, you can also open a terminal in vscode and use grep. If you’re running remotely, the terminal is also remote. That’s what I normally do.



VS Code uses ripgrep under the hood (locally and remote).


I worked at a place that had a half-built distributed system that we still needed to use (many bidders buying ad space from an API-based market). One great thing with tramp is that you can tramp into multiple systems simultaneously, so you are editing files from, say, 5 different systems (tweaking the yaml or whatever) at the same time. You could then start eshells on each of those systems at the same time. It made it really easy to adjust the settings and restart multiple apps really quickly (big screen, 5 files on top, 5 shells on bottom).

I always get a kick out of people saying "you use that! you need to switch to editor X, it has feature Y!" and me thinking, yeah, that feature has been in emacs since before you were born. It is getting a bit crufty in its age though. Its main attraction is for people who like LISP. There's a project called Lem (IIRC) that is rewriting it in much higher performance Common Lisp.


Absolutely: https://lem-project.github.io/ Works for Common Lisp out of the box (it's a Lisp machine) and for other languages (LSP client).


> Using the editor on the server from a remote connection is silly.

Why?



Constant screen redrawing and input lag.


Which is not only not the case with VS Code, but is explicitly explained at the top of the thread.


> Which is not only not the case with VS Code [...]

Which is also immediately mentioned after the claim that using a remote editor is silly.



Tramp is quite slow though, IMHO, and last I used it, Emacs very much expected file access to be synchronous.


Tramp has like four backends, try sshfs if ssh is too slow


People still forget Eclipse when it comes to a full-blown yet not bloated IDE. That thing consumes less memory than a bare-bones VSCode install while running 5x the tools. It has been able to handle everything from code to git to CI/CD and remote development since 2013.

I've been using it for 20 years and I think it's the unsung hero of the IDE world.

This article doesn't mention it either as a modern GUI IDE.



There’s a reason people don’t talk much about Eclipse these days and it’s because it was a pain to maintain back when it really should have shone.

I really wanted to like Eclipse but gave up on it a decade ago because it required constant management from release to release. I remember one job where I didn't need an IDE all that often, and each time I came back to it I would spend nearly as much time reconfiguring Eclipse as I spent writing code in it.

I’m sure it’s improved leaps and bounds in that time - 10 years is a heck of a long time in any industry, let alone IT. But I do know I wasn’t the only one who got frustrated with it. So myself and others switched to other solutions and never looked back.



I was there, but it has changed. "Four updates a year" was a great decision to make, to be honest.

It just updates now, and I export my installation XML and send to people when they want the exact same IDE I use.



I used to like Eclipse but honestly it was, and still is, a hog. At the time i used it in the late 2000s it was basically the best IDE for C++, having features that Visual C++ users either did not have or needed to pay extra for a plugin to get. I used it at work then, when everyone else used Visual C++.

However at home i had a computer i bought late 2003 (which was a high end PC at the time but still) and the program was so heavy i remember despite running it under a lightweight environment (just X with Window Maker) i had to choose between Firefox and Eclipse because otherwise things would crawl due to the excessive memory use both programs made :-P.

Eventually i switched to other IDEs and forgot about Eclipse. But i did try it again recently, and while it obviously doesn't feel as heavyweight as it did back then (i'm running it on an 8 core machine with 32GB of RAM so it better be), it still feels sluggish and startup time is still quite slow.

Also TBH i never liked the idea behind workspaces.

These days i don't write C++ much but when i do i use either QtCreator or Kate with the Clangd LSP (which i also use for C).



I think 9 seconds of startup time with 1GB of memory use is pretty acceptable for an IDE the size of Eclipse (I just timed it).

Considering I'm not closing it down for whole day when I'm using it, waiting for ~10 seconds in the morning is not that bad.

In 2003, Eclipse was in its infancy and was an absolute hog, I agree on that front.

Actually you are not expected to have "n" workspaces. Maybe a couple (personal and office) at most. Project relationships and grouping is handled via "referenced projects".

Kate is an awesome code-aware text editor. I generally write small Go programs with it, but if something is going to be a proper project, it's always developed in Eclipse.



There were a couple things going on in 2003.

First, it was quite common for a company to buy a developer the exact same corporate standard computer as everyone else. So lots of computers had limited ram to run things like J2EE, Lotus Notes, and Eclipse at the same time. It was painful.

The startup was always slow because it preloaded everything. This was a deliberate choice so it wouldn't have to load things later and interrupt the developer. Just don't close it all day, and the experience was very good.

A plus compared to the standard of the day was that it ran native widgets. So doing something as simple as opening a file explorer to browse through your project was considerably faster than comparable IDE's at the time.

Personally, I loved the customization which was dialed all the way up. I could have multiple windows with different arrangements of panels within them, all saved. I haven't run across anything as configurable since then.

It also had the big benefit of their plugin system which shined when working with multiple languages in the same project.

It always felt to me like it became trendy to crap on Eclipse because of the slow startup time and it never could shake that.



> Considering I'm not closing it down for whole day when I'm using it, waiting for ~10 seconds in the morning is not that bad.

I tend to close and run the IDEs (and most programs) multiple times per day - a clean desktop kinda lets me clean/reset my thoughts - so long startup times are annoying. Of course i wouldn't avoid a program if it was responsive, fast and did what i wanted after it started up.

> Actually you are not expected to have "n" workspaces. Maybe a couple (personal and office) at most. Project relationships and grouping is handled via "referenced projects".

Yeah, i also had a single workspace, but i worked on a bunch of other things, including some Java stuff in NetBeans, and i wanted to have everything in one place. I do use and prefer IDEs, but every other IDE could just store projects wherever i wanted.



> I think 9 seconds startup time with 1GB of memory use is pretty acceptable.

9 seconds of startup time on a modern GHz computer is completely unnecessary and unacceptable IMO. There may be 9 seconds of work it wants to do at startup, but there's no way it needs to do it in a single thread before letting you start to interact with it. This is an optimization effort, nothing more. Give me a month with their codebase and I could get that down to under a second. (So could most decent software engineers.) It would just need to be something they actually put effort into.



In those 9 seconds, a Java VM starts up, brings up an OSGi-compliant platform, and loads all the plugins you have installed and enabled in that particular Eclipse installation. When the window appears 9 seconds later, the VM is warmed up and all your plugins + IDEs (yes, multiple) are ready to use. No additional memory allocations are done WRT your development plugins. Also remember that these plugins are not in isolation; there's a dependency graph among them.

In the following seconds, updates are checked and the indexes of your open projects are verified and rebuilt if necessary, which again takes some time.

If you think that code has not been optimized in the last 20 years, you're mistaken. Many tools, from Android Studio to Apache Directory Studio, run on that platform.

Nevertheless, I’ll try to profile its startup tomorrow if I can find the time.



It may not be about optimization, but about user experience. You may have to be clever and think outside the box. Can you save a snapshot of all that work so that the next instance doesn't have to do it before showing the window? And then assuming it has to do the work (which may not be necessary if it just started up--once a day is probably sufficient), it can redo the work in a separate thread.
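The "separate thread" suggestion is a standard pattern: present the UI immediately and defer heavy initialization. A minimal sketch in Python (the task and all names here are hypothetical, not Eclipse's actual startup code):

```python
# Sketch: show the "window" right away and do slow startup work in the
# background; features that need that work block only on first use.
import threading
import queue

ready = queue.Queue()

def heavy_startup():
    # Stand-in for plugin loading / index rebuilding (hypothetical work).
    ready.put({"symbols": ["main", "helper"]})

# Kick off the slow work without blocking the "UI".
threading.Thread(target=heavy_startup, daemon=True).start()
print("window shown, editor usable")  # happens immediately

# The first feature that needs the index waits only if it isn't done yet.
index = ready.get()
print(sorted(index["symbols"]))
```

The same idea extends to snapshotting: persist the expensive result so the next launch only validates it instead of rebuilding it.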


Eclipse already does non-critical background tasks on separate threads, and non-critical startup tasks are done in a "deferred early start" queue, which is emptied after the initial startup.

Normally Eclipse IDE is not something like Vim, which you enter and exit 10 times a day. It just lives there and you work with it. 10 seconds in the morning for a tool that big is very acceptable, esp. after considering that everything is instantaneous after that 10 seconds.



Android Studio is IntelliJ


It was Eclipse when they first started. Still tons of IDEs run on Eclipse platform, too. Esp. on the embedded space.


It wasn’t “a hog” it was the hog. I don’t know where OP gets the idea that it was svelte. IntelliJ is considered a pig and a half in most eras but at the time, for most if not quite all projects, Eclipse had a worse memory footprint, for less functionality.

Also the UX was mediocre at best and infuriating at worst. Practically every interaction worth performing in that editor took at least one more click or keystroke than IntelliJ, and I would rank IntelliJ as merely good, but not amazing with input economy.



For about five years, my daily start of the day ritual was starting eclipse, going to a 10 minute standup, and coming back two minutes before it stopped loading. To be fair, it's probably better now, and I stopped doing Java work in 2014.


Anyone who thinks Eclipse is compact is hallucinating.


What I don't understand about Java is why it doesn't just take what it needs. If I commanded Eclipse to open, that's it: open an editor, maybe 2-3 recent files, and let me move the cursor around. If IntelliJ isn't ready yet, so be it, but don't slow my UX down because it's running a bunch of services I didn't ask for. If I hit the IntelliJ autocomplete then fine, I'll wait if it's not ready, but until then the editor frames should be just as snappy as Notepad. Java doesn't put the user first!


One of the biggest tricks with Java IDEs was not giving them more memory, but giving them more initial memory.

Tuning startup heap size could cut upward of 40% off of startup and settling time.
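For reference, the "initial memory" knob is the JVM's `-Xms` flag (with `-Xmx` as the maximum heap), which for Eclipse goes in `eclipse.ini` after the `-vmargs` marker. A sketch with illustrative values, not recommendations:

```ini
-vmargs
-Xms1024m
-Xmx2048m
```

Raising `-Xms` toward the heap size the IDE typically settles at avoids repeated heap growth (and the GC churn that comes with it) during startup.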



Interesting. I hate Eclipse with a passion; I find the ergonomics so horrendous, and back in the day it was a hog. Maybe on today's hardware it's leaner than Chromium-based VSCode. But the last time I tried to use git with it, it made things 10x harder than the CLI. It was so bad that I developed RSI within 24 hours (and I'm a daily emacs user).


It’s possible that Eclipse has had a “Firefox moment” where someone carved it down to a lighter core, but I’ve no reason to check.

Seconded on the ergonomics. They were a joke. Longest inputs of any IDE I've ever used. If your sequences are longer than vim's, you need to get your head examined.



eclipse was a child of the java components era; even a trimmed-down eclipse would still have tons of baggage.

i really despised (to stay polite) everything about eclipse/java culture: lots of generic layouts and components, nothing i cared about or that brought me dense information about the code. way too much chrome and perspectives and whatnot. it was a cultural dead end; the people who "enjoy" working this way are on a different axis from me. give me emacs+magit, where things are right under your fingers and easy to extend. people using that kind of tool (i'm sure the vim/neovim crowd likes this even more) produce more tools of the same kind.



Sorry, but "not bloated" really doesn't enter my mind when I think of Eclipse. The few times I used it for Java programming, it took forever to start up, and the UI was laggy as hell during regular use. Granted, that was about 10 years ago, but on a (at the time) beefy Windows PC.


I (author) wouldn’t say I “forgot” about it. I was there when Eclipse became a thing, and my memories are all pretty grim. Difficult to use, slow, very resource hungry… so I never really paid much attention once I finished school. It probably is better now as others are saying, but I don’t know nor care at this point to be honest.


I started Android development with Eclipse. That IDE is a beast. People also forgot about Netbeans.


Netbeans was my absolute favorite IDE for Java development. After its last release, I honestly felt lost.

I've gotten back up to speed via IntelliJ, but it still doesn't feel as effortless as it did in Netbeans. And Netbeans needed way less care and feeding than Eclipse.

Sorry, there’s a lot of “feels” in this post but for me, Netbeans was the one Java IDE that I didn’t have to fight with.



Yes, Netbeans was very underrated. I used it for making Nokia Java ME apps, and for learning Java.


Still is. It has quite a few features for such an open source product: the Swing editor, two-way editing between rendering templates and Java code, and quality profiling tools.


My first Java IDE was Symantec Café (which became Visual Café). I haven't thought about that in 25 years.


I also used NetBeans a bit years ago, though that was mainly because it had a (mostly) WYSIWYG editor, compared to Eclipse (technically Eclipse had a plugin for that which supposedly was also superior in how it worked - it parsed the code to figure out what the GUI would look like and updated in place, instead of NetBeans' approach of generating code with commented-out sections you weren't supposed to touch - but in practice it was both slow and clunky).

For Java specifically i felt NetBeans was faster and simpler, though i bounced between it and Eclipse because i also used Eclipse for other stuff (C++ mainly), so unless i wanted a GUI i used Eclipse. I did stop writing Java some time ago though.

I did try a recent NetBeans build but i found it much less polished than what i remember from before it became "Apache NetBeans".



What do you mean, “last release”? NetBeans 20 was released just this month. I still use it.


Apologies for not clarifying -- the last release of Netbeans prior to the Oracle acquisition of Sun.


I have good memories of Eclipse, from back when I was doing Java. I remember at the time it seemed everyone dissed it, much as it feels like everyone disses Jira now and for the last decade, but I liked it.


I still love Eclipse, and you can pry it from my cold, dead hands.

The last couple of years, however, it feels like Eclipse is actively getting worse. And I don't mean that it's lacking features. I mean that every new release seems to break something else.

I tried reporting some bugs, but that required signing some kind of soul-selling agreement with the Eclipse Foundation or some other nonsense.

I then tried fixing those bugs, but there is no up to date documentation on how to build the IDE from the myriad of repositories and modules. So I gave up.



My experience with Eclipse, about 10 to 15 years ago, was the exact opposite. It was incredibly bloated. With some combination of plugins installed, it became unusable. At a previous company, we were using some sort of Scala plugin, and Eclipse couldn't even keep up with my typing! I moved on to IntelliJ around that time.


All of the JetBrains users sitting around comparing notes, trying to figure out what was wrong with our coworkers that they thought eclipse was worth using, let alone defending.

JetBrains has plenty of problems, which they seem to want to address but I fear Fleet won’t fix, and I lament but understand people wanting something lighter these days, but eclipse isn’t even in that conversation.



Additionally, I always felt the whole Eclipse "user experience" was terrible. Setting up a project was a mess. The default layout left a tiny window for code. The default fonts were bad. I could go on.


"It's only free if your time is worthless."


Honestly, I feel like the primary reason why IntelliJ "won" over Eclipse and NetBeans was that it was first to market with a decent-looking dark mode - back when Eclipse and NetBeans were as stark white as Windows Notepad, and caught with their pants down as developers abruptly decided en masse that white backgrounds were over and every app needed to be dark-mode first.

Hell, Eclipse STILL doesn't really have a nice dark mode. The actual editor view looks okay, but the dark mode feels very bolted-on to the surrounding UI.

I think this is the primary reason why VSCode is eating the world today. People will talk about the plugin ecosystem and all these other community inertia advantages. However, VSCode was exploding in popularity BEFORE that plugin ecosystem was in place! If we're really honest with ourselves, we flocked to it because it was even more gorgeous-looking than Sublime Text, and without the nag modal asking you to pay someone 70-something dollars.

Appearances MATTER.



Eclipse is the first thing that comes to my mind when I think of the most bloated and stodgy IDE on the earth.


Ha. I mostly used Eclipse in college. I learned how to compile programs from the Command Prompt (Windows user back then) primarily to avoid Eclipse LOL. It was dog slow and somewhat difficult to navigate


There does seem to be a lot of hate for eclipse. The complaint I always hear is that it is a pain to use. Personally I’ve always liked it, even though I’ve used the other popular IDEs.


Same here.

You will find old rants from me complaining about workspace metadata, but that problem has been sorted for quite some time now.



Agreed. And there's simply nothing that comes close to the power of the workspace when working on multiple projects that share dependencies.


The original idea was to replicate the Smalltalk image approach, but backed by a virtual filesystem instead.

Eclipse is Visual Age for Smalltalk reborn, after all.

It was common to have plugins corrupt its metadata, but somehow it finally became quite stable.



But Eclipse was often laggy and slow, so it felt more bloated to users than VS Code, which feels snappier even though it is bigger.


It was, for C++, for a couple of years, 12-13 years ago. It has been neither laggy nor slow for the last 8-9 years. I wrote my Ph.D. thesis code in it, in C++ - a sizeable high-performance codebase.

It never crashed, allowed me to work remotely if required, integrated with Valgrind, allowed me to do all my tests, target many configurations at once, without shutting it down even once.

Currently it has a great indexer (which is actually an indexer + LSP + static analyzer and more), LSP support if you wish, and tons of features.

It gets a stable release every three months, comes with its own optimized JRE if you don't want to install one on your system, etc.

Plus, it has configuration snapshots, reproducible configurability, configuration sync and one click config import for migrating/transforming other installs.

That thing is a sleeper.



While Eclipse today is certainly a quite decent IDE - I use it mostly in the form of STM32CubeIDE[1] now - it was serviceable at best back in 2005-2006 when I used it for some Java classes.

In any case, it's a younger product than the offerings in the article.

[1]: https://www.st.com/en/development-tools/stm32cubeide.html



> In any case, it's a younger product than the offerings in the article.

Yeah, but my gripe was about the closing of the article, which mentioned VSCode. I think the author just doesn't know about it.

Eclipse is my DeFacto C++/Python IDE and I'd love to develop a decent Go plugin for it, too. Maybe someday.



Not just C++. I used to use it for Java development and had the same experiences as the GP too.

I’m sure it’s really good these days. But I’ve moved on now and my current workflow works for me, so I don’t see the point in changing it until I run into issues again.



Java never got that slow, but it used to tax the system a lot in the earlier days, yes.

I developed Java with Eclipse, but the project I worked on was not that big back when Eclipse was not yet in its prime, and by the time it was, I was experienced enough to be able to "floor it" in terms of features and project complexity.

Now it's just a blip on the memory usage graph when working with big projects, and way, way more efficient than the Electron apps which are supposed to do 20% of what Eclipse can do.



Eclipse also had (has?) the very interesting Mylyn plug-in, which narrows down the code to the context you're working within. Think collapsing everything in, e.g., the project tree, and also functions within files.

This context is built up based on what part of the code you work on.



I love eclipse, but it's unbearable on macos


How come? I use it regularly. Genuinely asking.


I don’t know. It’s even worse with IntelliJ. IntelliJ crashes regularly. It's unbearable.

Running an M1 on Sonoma.



Interesting - I run Intellij Ultimate on Macbooks (both Intel and m2) and never have a crash. Infrequently run into bugs when upgrading the ide or 3rd party plugins; that requires some sort of cache invalidation or project reimport (couple times a year), but it's pretty smooth sailing for something I use across many different projects and languages. Java, kotlin, TS, python, groovy, shell scripting, json/xml/yaml/html/tsx are all generally touched 40+ hours on a weekly basis - it just works.

I do agree intellij is memory hungry with multiple projects open and a variety of languages involved, but RAM is cheap enough (and VMs/Docker/K8s hungry enough) that I just don't buy a machine with less than 32GB anyway, so I give intellij up to 6 GB and never give it another thought.

I don't do much android development, but do find Android Studio to feel clunky and slow at times, guessing because of the heavy integration with Android dependencies and emulation, but not really something I know enough about to comment with any sense of authority.



How so? Use it daily, with hundreds of open projects and it just flies.


I think you are mistaken; Eclipse takes up 3 times the RAM VSCode does. I can use VSCode with only 6 GB of RAM even with big projects with native code such as Kotlin, Java, C, Swift, etc. Eclipse will not run on 6 GB of RAM, and neither will JetBrains IDEs or Android Studio.


My system monitor says it's using 1.3 GB after warming up, even after forcefully reindexing a big C++ project.

I don't think VSCode will use 400 MB with that amount of code, plus Electron, plus all the LSP stuff you run beneath it.

In that state Eclipse will fit into a 6 GB system just fine. I'd love to try that in a VM right now, but I don't have the time, unfortunately :)



If memory serves, fully loaded Eclipse would take about 20-25% more memory than IntelliJ, which was itself rightfully called greedy.

At the time most of us felt it was worth the cost of entry for all of the tools you got, which eclipse had a subset of.



I should fire it up; I haven't tried it in a while. It was the only thing I could use that seemed to accurately (more or less) index large projects that you, uh, had some issues compiling and just wanted to navigate around and look through the code. Now I mostly just use rg for big projects, inside of neovim.


I’ll raise you NetBeans to that.


A blast from the past there. I used Eclipse for Java in its infancy while I was at university and thought it was decent enough compared to whatever version of emacs would have been on whatever version of Solaris was on my CS department servers.

A couple of years later I started an internship at a bank and spent ~3 hours trying to get a project building before someone introduced me to IntelliJ, which I still use every day almost 20 years later!



VSCode is really a text editor in IDE clothing. Also, it's an Electron app, and those are notoriously resource-heavy.

~20 years ago I became an early IntelliJ user. From version 3 maybe? It's hard to recall. I've never looked back.

But I did try Eclipse and... I never got the appeal. For one, the whole "perspectives" thing never gelled with me. I don't want my UI completely changing because now I'm debugging. This is really part of a larger discussion about modal editors (eg vim vs emacs). A lot of people, myself included, do not like modal editors.

But the big issue for Eclipse always was plugins. The term "plugin hell" has been associated with Eclipse for as long as I can recall. Even back in the Subversion days I seem to recall there were 2 major plugins (Subclipse? and another?) that did this and neither was "complete" or fully working.

To me, IntelliJ was just substantially better from day one and I never had to mess around with plugins. I don't like debugging and maintaining my editor, which is a big reason why I never got big into vim or eclipse. I feel like some people enjoy this tinkering and completely underestimate how much time they spend on this.



For me, perspectives are perfect, because they provide the right set of tools for everything I do at that moment. It's probably a personal choice, so I understand and respect your PoV.

The plugin conflicts were way more common in the olden days, that's true; however, I used Subclipse during my Master's and it was not incomplete, as far as my memory serves. It allowed me to do all the wizardry that Subversion and the managed Redmine installation Assembla had to offer back in the day.

It's much better today, and you can work without changing perspectives if you prefer, so you might give it another shot. No pressure though. :)

Trivia: the VSCode Java LSP is a headless Eclipse instance.



At a minimum, perspectives play very nicely with the plugins system.

Eclipse was created around the extremely interesting idea that you can write a plugin to do some completely random task, and have all of it reconfigured in the perfect way for that task.

But you can't have a rich ecosystem of plugins without organizing them in some way, and nobody ever created a Debian-like system for them as it's a lot of thankless hard work.



I’ve been using VSCode for a few years now, and while I find its search amazing, it doesn't do much more for me. Its syntax highlighting is good, but the autocomplete recommendations have been driving me insane recently.

Writing a Rails API with a Next.js UI - anyone got any suggestions on alternative paths I should take?



JetBrains solutions. I think it's called RubyMine.


This may not apply to you but I find it so weird how many programmers won't invest even a modest amount into software they'll use 8 hours a day every day. Particularly when we'll so easily spend money to upgrade RAM or buy a new PC.

RubyMine on a cancel anytime personal license is $22.90/month (or $229 for a year). That's nothing. I'd say just try it. If you don't like it, you might only be out $23.

I'm not a Ruby person so can't comment on that really. For Java (and C++) it's a lifesaver. Things like moving a file to a different directory and it'll update all your packages and imports. Same with just renaming a class or even a method.

The deep syntactic understanding Jetbrains IDE have of the code base is one of the big reasons I use them.



> People still forget Eclipse

thank god



For me, the closest modern successors to the Borland suite are Visual Studio (not VSCode) and the JetBrains IDEs. They feel like the only ones with a holistic, batteries-included design that actually focuses on debuggability.

I actually feel that the terminal-based focus of modern FAANG-style development hindered proper tool development, but I was never able to explain it to anyone who hasn't used Borland C++ or Borland Pascal in the past, except maybe to game developers on Visual Studio.



C++ Builder versus Visual C++ for RAD GUI development.

I never understood why the Redmond folks have such a hard time conceiving of a VB-like experience for C++ tooling, like Borland managed to achieve.

The two attempts at it, C++ in .NET and C++/CX, always suffered pushback from internal teams, including sabotage like C++/WinRT (nowadays in maintenance as they are having fun in Rust/WinRT).

The argument for language extensions, a tired one, doesn't really make sense, as they were Windows only technologies, all compilers have extensions anyway, and WinDev doesn't have any issues coming with extensions all the time for dealing with COM.

Or the beauty of OWL/VCL versus the low-level nature of MFC.



DevDiv vs WinDev. The Windows group maintains the C++ compiler, so you get the resource editor for dialog templates and that’s about it. And that actually got worse from Visual Studio .NET onwards; my guess is that it got taken over by the DevDiv people when they unified the IDEs.


Yes pretty much that.

Windows could have been like Android, regarding the extent of managed languages usage and NDK, if DevDiv and WinDev had actually collaborated in Longhorn, but I digress.



out of the loop, how is terminal-based development related to FAANG?


I guess it's caused by the "brogrammer" culture of Silicon Valley, where you would get hazed if you dared to use a GUI-based tool. There is also the focus on open-sourcing their tools (because other companies do not open source theirs, and are therefore un-cool), which begets a "simpler", "engineeristic" approach to UX that does not need UI experts and designers.


Lots of companies end up with their own internal tooling. They have their own build systems, packaging systems, release systems, version control, programming languages, configuration languages, everything.

Some even have their own editors.

There is a lot of value in picking a transferrable editor and using that. From that point it becomes "what is the best editor that will _always_ be available". Emacs/Vim fit that.

Then the muscle memory can begin to grow, and there is one less bit of friction in starting a new job.

One of the best pieces of advice I received was "pick an editor and go deep".



> One of the best pieces of advice I received was "pick an editor and go deep".

Agreed, I'd be infinitely less productive if I couldn't use the editor I learned to master in the past 20 years.

A corollary to that would be "pick a company that lets you use your own editor". There's lots of friction from IT departments towards emacs and vim. The package/plugin system is a security nightmare with lots of potential supply chain attacks and more importantly no trusted vendor to blame when something goes wrong.



It became sort of a hackerish trend in the past decade, using a hyper-customized (Neo)vim in lieu of an IDE.


Except maybe Apple, all the others are service-oriented companies. They run heterogenous pieces of code on their servers and their ideology is “move fast and break things”. It’s a hipster culture that reinforced the use of 1980s “video terminal” editors and CLI tooling because they were supposedly more flexible for their workflows.


I loved Turbo Pascal, but to me the high point of Borland's tooling was Delphi (1995). I don't want to sound like old man yells at cloud, but every time someone says that building GUIs with Electron is so easy compared to native apps, I just wished they experienced Delphi in its prime.

There are some very short/simple demos on YouTube:

https://www.youtube.com/watch?v=m_3K_0vjUhk



> but every time someone says that building GUIs with Electron is so easy compared to native apps, I just wished they experienced Delphi in its prime.

Every time someone says that, I mention Lazarus. I still get a thrill out of using it (one of my GitHub projects is a C library, and the GUI app is in Lazarus, which calls into the API to do everything).

The problem I find with Lazarus is that it seems to be slowly dying; yes, they still work on it, but feature-wise it is far behind what can be done with HTML+CSS and a handful of JS utility functions.

A wealthy benefactor could very quickly get Lazarus to the point of doing all the eye-candy extras that HTML+CSS let you do (animated elements, for example).



Looks pretty similar to C# WinForms that ships with Visual Studio. https://youtu.be/n5WneLo6vOY?si=maped85dMX90KIn1


I will happily fill an hour with trash talking Microsoft, but getting the father of Delphi on board is one of the shrewdest things they’ve managed. I wish he’d found a different project to sink his teeth into though.


They can still experience it today with the community edition.


If you can agree to their very strange terms and conditions.

Or, use Lazarus/Free Pascal, which is almost identical, except for the documentation, which needs a massive overhaul in both tooling and content.



Not everyone is religious against such agreements.

Those who aren't can profit from the very latest version.



At least the programs in the screenshot have actually useful and visible scrollbars. Seriously, scrollbars are super useful and should never be hidden: they both provide information you want to see and afford actions you want to take. Why is everything trying to make them as subtle as possible today, even most Linux UIs, which I'd expect to be designed more for usefulness than for "design trends"?


GitHub's Android app doesn't even show scroll bars. And no scroll grab or snapback in apps even when there is a scroll bar. Am I the only person who scrolls back to check something and wants to quickly return to where I was in a document? Even if just FF on Android had this I would be happy.

On desktop we can drag scrollbars but I can't imagine what it's like to use modern 4-8px action area scroll bars if you have fine motor control challenges.

I just don't understand how we got to this point. Do people not use the apps they write?



This must be a bug though. If you unfold hidden comments, you jump to the BOTTOM, where you just WERE, rather than the top. So you scroll up, with no scrollbar, frantically, because you don't know how far you have to go. Until you reach the top - and you drag down ONE MORE TIME, because you're scrolling frantically, so the whole thread reloads, and everything is folded again, and you're back where you started.


On Linux this depends on your theme, really; all the themes I use have scrollbars - e.g. here is an example with Gtk3 (which IIRC introduced "autohiding scrollbars" to the Linux desktop)[0]. It is "cdetheme-solaris", which I think is from [1]. I might have modified it a bit though. Normally, though, I use Gtk2 apps with a modified "cleanlooks" theme (a screenshot from Lazarus[2] I made a couple of days ago shows it - including the scrollbars :-P).

[0] https://i.imgur.com/CAyu5Ay.png

[1] https://github.com/josvanr/cde-motif-theme

[2] https://i.imgur.com/Yw1tTcD.png



Moreover, make the scrollbars big enough for my thumbs on my touch screen. Or at least make it optional.


NeXT had a terrific IDE 30 years ago, called Interface Builder. More info: https://arstechnica.com/gadgets/2012/12/the-legacy-of-next-l...


The IDE was Project Builder, Interface Builder was a separate, as the name implies, builder for interfaces that could then be connected to Obj-C code in Project Builder. They continued as separate apps even after Apple bought NeXT and shipped Mac OS X, until they unified the two into Xcode.


The two most indispensable programs for me are "Midnight Commander" for Linux/macOS and the like, and "Far Manager" on Windows (and mc there too, under WSL2).

I'd be lost without Far Manager at work. I stopped using Explorer a long time ago (and use it only to verify some reported workflow from other users, or some obscure thing not possible in Far Manager).

Other tools: tig (pager), emacs, mg (mini-emacs) - I wish that one were available on Windows too without needing msys2/cygwin - and a few others.

Back in the DOS days - it was Norton/Volkov Commander, PC Tools, but also Turbo/Borland Pascal 3, 4, 5; the most awesome E3 editor (from this series - https://en.wikipedia.org/wiki/E_(PC_DOS) ) - and many more



Even if VSCode/other IDE had features that blew Neovim out of the water, I don’t think I’d move over. The customizability, modal aspect, and open-source-ness are huge for me. I can create macros on the fly for repetitive tasks, can navigate without ever having to stall my train of thought to touch my mouse, and customize every single key binding and the code that it runs. I can create custom commands to do every single thing I’ve ever conceived of wanting to do. I can upstream bug fixes and see those others have suggested that haven’t been up streamed yet. I will concede that for some, maybe this is too much work to set up “just a text editor”, but I enjoy it, and I spend most of my day editing or viewing text, so to me, it’s worth it.

If there is one thing I’ve learned in my years of software engineering, it’s that everyone prefers different workflows. Let people build their own, however they want, with whatever tools they want, and they will be happier for it.



Almost everything you wrote is available for any modern IDE, and modern IDEs, thankfully, don't assume that your code is text. So they give all the things you mentioned and superior tools to work with code out of the box: anything from refactoring to code analysis to several types of semantic search to...


I am easily distracted (as in, ADHD-like), I enjoy very sparse work spaces in general. Tools with lots of icons, windows, and other widgets are very uncomfortable to me. I prefer typing commands, I believe a well written command language and a good search function are more comfortable. I will go to great lengths to avoid some tools if it means avoiding a clickodrome: for example, when coding for STM32 devices, I prefer bare metal GCC + makefile over STMCube.

To each his own, though; it's nice to have different tools for different people.



30 years ago, THINK C for the Mac was already nearing discontinuation. It was a great compiler plus graphical IDE with debugger for its time. Hard to find info about it, but this site has some screenshots of the various versions:

https://winworldpc.com/product/think-c/4x#screenshotPanel



Yes! I absolutely lived in Think C for many years. You’re right though, it was on the way out by then, supplanted by CodeWarrior and MPW, which were both really good too.


I hope someone from Embarcadero is paying attention to this thread. They have had some great IDEs but their primary attraction was the price point and the ease of use of the products. Please make Delphi affordable again.

Considering that Delphi can be used for Android, IOS and Linux development as well, it would be a great tool - if it weren't for the insane pricing.



Visual Studio and XCode are the closest experiences to “first-class IDEs”, reminiscent of the Borland stuff from the early 1990s. They offer tight integration with the native toolchains and a set of menus that mostly make sense. Environments like VSCode or Emacs are a generic platform for text editing and file manipulation, a lowest common denominator for a variety of languages, workflows and tastes.


Try Eclipse, or Geany if you want something very small, yet powerful for its size.


How odd to find a mistake in 30-year-old Turbo C++ man pages from a screenshot. printf and vprintf send formatted output to stdout, of course, not stdin.


Vim is the only tool I've been able to use at every place I've ever worked at, from intern to staff engineer in three FAANG companies. I've watched tool teams spend months integrating the latest GUI editor, only for it to get killed by corporate acquisitions and replaced with N+1 that offers an almost identical feature set.

Meanwhile there's always a community of vim and emacs users who build all the internal integrations by themselves. Vim and Emacs aren't editors, they're platforms and communities, and the benefit of using them over VSCode or JB is that you get to be a part of these communities that attract the best talent, give the best troubleshooting advice, and share advanced configurations for everything and anything you could possibly ever want. They are programmable programming environments first and foremost, and that attracts people who are good at programming and like to hack on stuff.

Technologists who choose the proprietary path of least resistance when it comes to their most important tools are, I think, ultimately missing out in a lot of ways, not least the actual editing experience. A craftsman should understand his tools inside and out, and picking something you can't fully disassemble, and that lacks the breadth of knowledge behind a tried-and-true open-source tool, ultimately becomes just as frustrating as the initial learning curve of these older tools.



Changing IDE isn't that big of a deal.

I would much rather have to spend a few days to relearn a tool every few years and to get the benefit of that tool, than accept a lower quality tool just to avoid a few days work.

If you worked in C++, then Visual Studio has been around for 20 years, and Visual C++ for 10 years before that. If you use Java, then IntelliJ has been around for 20 years, and PyCharm for 15. If you're writing JavaScript, I don't know what to say, because the framework du jour has changed so many times in that time frame that I don't think the tool saves you much.

> Technologists who choose the propriety path of least resistance when it comes to their most important tools I think are ultimately missing out in a lot of ways, least of all is the actual editing experience

Equally, I can say purists or ideologists are so concerned with theoretical changes and breakages, and so afraid of the possibility of something changing, that they miss out on game-changing improvements to tooling.



I think the way people use IDEs is a lot deeper than just reducing them down to "purist" or "ideologist". That sounds a tad bit dismissive for something that is essentially your trade tool. It's akin to saying all keyboards are created equal because they have the same keys. The way you lay the thing out and the perspective that you build it for matters quite a lot. Distilled in a quote, "the whole is greater than the sum of the parts."

I got used to JetBrains' key mappings when I was at my last company, I also adored their debugger. My new company uses VSCode and I started down the venture of remapping all of them to JetBrains keys. I ended up with a lot of collisions and things that no longer made sense because the keys were mapped using a different perspective when laying them out. I'm sure I'm not alone being in a pool of engineers that primarily navigate using their keyboard.

VSCode's debugger is better now, but it still doesn't really stand up to JetBrains'. On the other hand, launching VSCode on a remote box is much easier and their configuration is much more portable with their JSON based settings files. I like using VSCode, but it took me months to get up to speed with how I navigated, and more generally operated with, JetBrains' IDEs.



> I would much rather have to spend a few days to relearn a tool every few years and to get the benefit of that tool, than accept a lower quality tool just to avoid a few days work.

I've worked with engineers who studiously avoid configuring any sort of quality of life improvements for their shell, editor, etc. because they claim it makes things easier when they have to use an environment that they can't as easily configure, like sshing into a shared box without a separate user for them to customize. This mindset has always been hard for me to understand; not only does it seem like they're optimizing for the rare case rather than the common one, but it seems like they're actually just lowering the quality of their normal experience in order to make the edge cases feel less bad without actually improving them at all



> I've worked with engineers who studiously avoid configuring any sort of quality of life improvements for their shell, editor, etc.

This is me. It's really easy to explain my mindset here, though. If I get used to a nonstandard tool or tool configuration, then it causes real productivity issues when I'm using a system that lacks that tool or the ability to customize.

This is not a rare edge case for me at all. I constantly work on a half dozen very different platforms. Having each platform work as much like the others as possible is a quality of life improvement, and improves the efficiency and quality of my work.



But isn't that why we have config files that you can easily copy to a new system?

But I guess you have to weigh how much time you'll gain from copying your personal config against the overhead of copying it in the first place. If you often switch to new systems where you'd have to copy the config file, but only really edit two or three files on each one - for which a personal config won't have much benefit - then it is understandable. If I need to set up a new server, for example, that doesn't need to be configured heavily, just some installs and small config changes, and I won't have to touch it anymore after that, then why would I spend time putting my personal editor config there? But if I have a personal computer that I use daily and edit a lot of code on, it would benefit me greatly to optimize my editor for my use cases. And whenever I get a new PC, or get one from work, I can just copy my config and benefit from it.



I’ve usually had this attitude in the past and for me it came from my background: I worked IT, help desk, PC repair, etc for years before I got into programming and none of those contexts allow customization. And then for a while I did infra work where I’d be ssh’d into a vanilla VM or container instance to debug a deployment issue. Even though I mostly work in my own bespoke environment now, I still try to stay fairly vanilla with my tools. It’s pretty nice to be able to reset my workstation every year to get a clean slate and know that it’ll only be a couple of hours of customization to get back to what I know.


> optimizing for the rare case rather than the common one

This has been a common enough case for me to not get too used to super customized local environments. I'd rather learn the vagaries of commonly available tools than build myself a bespoke environment that I can't port anywhere easily. That's not to say I don't do any QOL changes but I try to be careful about what I end up relying on.



A person using Vim or Emacs has had best in class integration with the unix environment, modal editing, and remote development. Today, both editors have integration with VCS via fugitive or magit, fuzzy finding, LSPs, tree sitter, and code generation tools using LLMs. These tools have not stagnated, they've continued to evolve and stay best in class in many areas. So the "one tool is better than the other" argument doesn't really sway me. My point still stands that the community and open architecture are more important than any one editing feature.

> Equally, I can say purists or idealogists are so concerned with theoretical changes and breakages, and so afraid of the possibility of something changing that they miss out on game changing improvements to tooling.

Blindly following the crowd is also dangerous. Making choices based on principle is what allows good things like open source communities and solutions not swayed by corporations to exist, even though they might require more up front investment.



> My point still stands that the community and open architecture are more important than any one editing feature.

No, it doesn't, because it's essentially a matter of opinion, not an objective fact that can be measured and proven. You prefer to have an open architecture and a community of enthusiasts. I prefer to have most of my editor features available out of the box, and modal editors just confuse me.

At the end of the day, developer productivity is not a function of their editor of choice, so what matters is that each developer is comfortable in the environment they work in, whether that be Vim, Emacs, IntelliJ, or VS Code.



Learning curves are uncomfortable, so by your logic we should all always take the path of least resistance and use the tool that makes things easy up front without considering the long term benefits of using something like Vim or Emacs. I find this to be counterproductive to having a great career as a software engineer.

Rapidly assimilating difficult to understand concepts and technologies is an imperative skill to have in this field. Personally, I find the whole notion of Vim being difficult to learn, or not "ready out of the box" perplexing. Writing some code that's a few hundred lines or less, where it's mostly just importing git repos, is easy. Vim has superb documentation. How hard must regular programming be if it's difficult to just understand how to configure a text editor?



It's not that configuring the editor is hard, it's that it's unnecessary—the only thing you've been able to identify that I'm missing by using IntelliJ is an ideology and a community, neither of which are important to me in a text editor.

If it matters to you, that's fine—use whatever you're comfortable with! I just don't understand why you feel the need to shame others for choosing to focus their energy on something else.



The problem with these tools is that despite having worked with computers for 35 years, I don't get them. My brain is not made for them.

I only use out-of-the-box vim when I work on consoles (which is still a fair amount of the time). I can exit (hey!), mark/cut/copy/paste (ok, yank!), save, and find/replace if I must. Everything else is just beyond what my brain wants to handle.

A lot of Jupyter lab and some VSCode otherwise. I can't say I know all about those either.

The last IDE that I knew pretty well was Eclipse, in about 2004. I even wrote plugins for it for my own use. That wasn't too bad for its time; I don't quite get why it fell out of fashion.



There are those of us that still use it :) productivity gains lie elsewhere. And running various maven and git commands from command line instead of clicking around... something about keeping the skills in better shape


>I would much rather have to spend a few days to relearn a tool every few years and to get the benefit of that tool, than accept a lower quality tool just to avoid a few days work.

Vim police has issued your red warrants. Justice will be served.

Jokes aside, I'd say yes. I have worked with Eclipse, NetBeans, JB and nowadays I'm happy with VS Code. For a polyglot, it's the best out there at a price point of $0.00, and the tooling is pretty good for me.

I'm doing Python, Go, Typescript and occasional Rust without missing anything.

Being a few keystrokes faster with command kata is not going to save years of labor. The actual effort in software engineering is not in typing or editing text. Not at all. The battle is far beyond that, and bigger than that.

EDIT: Typos



just to add my $0.02 WebStorm makes writing NodeJS feel a lot more like Java - it's pretty good.


The best part is, you don't need to choose between using IDEs and your favorite text editor! Most modern IDEs with proper plugin support can be configured to provide a superset of Vim's functionality. I personally use IdeaVim on the IntelliJ Platform [1] with AceJump [2] and haven't looked back. You can import many of the settings from your .vimrc and it interoperates fairly well with the IDE features. Although I prefer Vim keybindings and it is technically possible to hack together an IDE-like UX with ctags and LSP, the IDE experience is so much better I wouldn't even consider working on a large Java or Kotlin project from the command line.

[1]: https://plugins.jetbrains.com/plugin/164-ideavim

[2]: https://plugins.jetbrains.com/plugin/7086-acejump



I think you missed the point of my post. The value of Vim/Emacs isn't the modal editing or key chords. It's the community and architecture, which you lose if you're still using JB with a frankenport of Vim on top. In fact, I think what you're suggesting is the worst of both worlds - a reimplementation of Vim on top of an already resource-hungry and complicated IDE that's supposed to let you do things with a mouse. So you're left guessing whether something from real Vim will be the same in your Vim, plus you now have two competing environments for ways to do things, and you have to wait for JB to implement features that (neo)vim have already implemented, without supporting the open source communities that did the work in the first place.

You also lose the killer feature of Vim, which is being able to work over an SSH connection on any sort of device, even those that don't have a GUI.



> You also lose the killer feature of Vim, which is being able to work over an SSH connection on any sort of device, even those that don't have a GUI.

In the last decade, I can count on one hand the number of times I have SSH'ed into a machine to do actual editing - and in every situation, nano would have been totally fine. Crippling my workflow to handle obscure scenarios we've mostly moved past is not a good decision.



> In the last decade, I can count on one hand the number of times I have SSH'ed into a machine to do actual editing

And I need to do this multiple times every workday. Generally speaking, this isn't an obscure scenario that we've mostly moved past. It's just not a common scenario in your particular work environment.



> I can count on one hand the number of times I have SSH'ed into a machine to do actual editing

Both companies I've worked at previously employed zero trust networking. That means developer laptops don't have privileges to things like secrets management infrastructure or even feature flag config. You end up making a choice: mock services that require trust, which comes with its own set of dangerous tradeoffs, or build remotely in a trusted environment. Many devs choose the latter.



As I said, I have worked at several FAANG companies where people had to wait for editor integrations because the source repos were so big you couldn't work locally. Having a tool that works everywhere no matter what has been incredibly valuable to my career. I also wouldn't say working for one of these companies that pays very well and handles a large portion of the world's traffic is obscure.


decade and a half for me and even then I remember telling my tech lead, this feels like the stone ages


> The value of Vim/Emacs isn't the modal editing or key chords. It's the community and architecture

These are empty words that have no meaning. I don't use my IDE for "community" or for "architecture". I use my IDE for writing code, navigating code, finding code, refactoring code, exploring unknown code bases, analyzing code, moving code around, reading code...

How many of those things have the words "community and architecture" in them?

> you have to wait for JB to implement features that (neo)vim have already implemented

You mean the other way around. Nothing NeoVim implements trumps the depth and breadth of features IDEA offers out of the box. NeoVim (and others like vim and emacs) is busy re-creating, with great delay and poorly, a subset of a subset of features of a modern IDE.



I assure you, the open source community around modern IDEs is thriving. I see plenty of innovation in plugin marketplaces that is hard to find in even the Emacs/Vim ecosystem. Despite its share of detractors, there is a lot of value in having a business model that prioritizes language support, platform stability and a well-curated plugin marketplace. The IdeaVim integration is thoughtfully designed and I seldom notice much difference coming from Vim. I see where you're coming from with resource consumption, but even Fleet is starting to offer rudimentary Vim support, which I expect will address many of the issues around bloat. [1]

[1]: https://youtrack.jetbrains.com/issue/FL-10664/Vim-mode-plugi...



I'm a happy ide with Vim bindings guy. We do exist.

I think in vim edit patterns when editing text, but I don't particularly care about most of the : commands. I'm happy to use the vscode command palette for that.



> You also lose the killer feature of Vim, which is being able to work over an SSH connection on any sort of device, even those that don't have a GUI.

The gold standard for remote development is Visual Studio Code. All of the UI stuff happens locally, and it transfers files and runs commands remotely. It's way less chatty than going over SSH or an X11 connection.



I heavily disagree. From experience, working over SSH with tmux allows me to work with my editor, run commands, start up various qemu instances, start debuggers etc, and other tools that have their own TUIs. I think remote VSCode makes sense to people who have very narrow needs to edit specific projects rather than live on a remote machine.


The terminal window from VSCode still gives you all of that, with some extra ergonomics from the GUI. No need to remember Ctrl-b % to split a tmux window, scrolling and find just work, and there's no need to install plugins to save sessions.


Can you save your session? I think I have a tmux session running for months now on my vps. Everything is exactly the same when I connect.


It's very telling who has had to actually support prod systems and who hasn't when it comes to this topic. Very often the only interface we used to have before better CI/CD pipelines and idempotence was a serial tty or ssh connection. There were a lot of sysadmins run off in the 00s (for various reasons such as increasing salaries) and a lot of institutional knowledge about real operations such as this was lost or diluted.

Another reason why I like to encourage people not to customize their vim/emacs too much (at least for the first year or so of learning): when it's 0300 and prod is down, you don't want to fight your tools to match your expectations. Another example: HN loves to hate on bash, but I love and live in bash.

The names and titles have changed, but I still see the same dev/ops battles.



Mileage varies; even the most ardent vim user I know gave up and switched to VS Code this year. It's just too much to try to keep up with when projects and technologies change. I've programmed in C++, Go, Python, Java, and Angular just in the last year. I can believe that there are vim plugins to handle all those, but the energy it would take to find auto-complete and navigation and formatting and debugging and any number of other out-of-the-box IDE features is more than I'd like to think about. Then there are the associated tools - Kubernetes yamls, swagger, markup, Makefiles. In IDEs they are only one plugin download away.

I love vim, I used it exclusively for years when I was doing C/C++. I still ssh into servers a lot and use it pretty much daily. Still, I'm far too lazy to try to turn it into my full-time development environment.



> I can believe that there are vim plugins to handle all those, but the energy it would take to find auto-complete and navigation and formatting and debugging and any number of other out-of-the-box IDE features is more than I'd like to think about.

Well, I'll be the bearer of the good news, then!

NeoVim has a native LSP client which unifies auto-complete, navigation and formatting, only requiring you to install a per-language LSP server.

As for debugging, there's also DAP (the Debug Adapter Protocol), which NeoVim doesn't support natively, but there's a plugin for that.
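For anyone curious what "native LSP client" means in practice, here is a minimal sketch (not any particular poster's setup) that wires Neovim's built-in client to gopls for Go files. It assumes Neovim >= 0.8 and `gopls` already on `$PATH`; the file path is the standard config location.

```shell
# Append a minimal LSP setup to Neovim's config. The heredoc is quoted so
# nothing is expanded by the shell; the contents are Lua.
mkdir -p ~/.config/nvim
cat >> ~/.config/nvim/init.lua <<'EOF'
-- Start gopls for Go buffers; completion, rename and go-to-definition
-- all come from this one built-in client, no per-feature plugins needed.
vim.api.nvim_create_autocmd('FileType', {
  pattern = 'go',
  callback = function()
    vim.lsp.start({
      name = 'gopls',
      cmd = { 'gopls' },
      -- Locate the project root by walking up to the nearest go.mod.
      root_dir = vim.fs.dirname(vim.fs.find({ 'go.mod' }, { upward = true })[1]),
    })
  end,
})
EOF
```

The same pattern works for any language: swap `gopls` for the relevant server (e.g. `clangd`, `pyright`) and adjust the `pattern`.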



There’s more to language support than LSP.

I use vscode and IntelliJ these days. In rust, IntelliJ lets me rename functions and variables across my entire project. Or select a few lines of code and extract them into their own function. It’ll even figure out what arguments the function needs and call it correctly.

I’m writing a paper at the moment using vscode and typst (a modern latex replacement). The vscode plugin shows me the resulting rendered pdf live as I type. I can click anywhere I want to edit in the pdf and the editing window will scroll to the corresponding source text.

Maybe there’s ways to do all this stuff in vim but I never found it. I used vim on and off for 20 years and I barely feel any more productive in it than when I was 6 months in. As far as I can tell, IntelliJ is both easier to learn and more powerful.

Making nontrivial software in vim just doesn’t feel productive. LSP is the tip of a big iceberg of features.



The rename across the project scenario is a LSP feature that Neovim supports. I use it frequently. I do miss the ability to trivially extract a function. I used to do that all the time in Visual Studio back in my C# days.


Yeah; and there's so many little features like that - most of which I use rarely, but they're still useful. Aside from rename, there are:

- Change function arguments. (Eg, reorder the arguments of a function and update all callers)

- Add a new function argument. Eg, if I change a call from foo() to foo(some_int), the suggested actions include adding a new parameter to foo with some_int's type.

- Contextually fill in trait, struct, or match statements

- Move a bunch of stuff to a new / different file

- Organize imports

- Run a test. Any function with #[test] and no arguments can be run or debugged instantly with the click of the mouse.

- Up and down buttons for trait implementations. See the definition for a trait method, or jump to one of its implementations.

I have no idea how to do any of this stuff in vim. Maybe it's possible with enough macros and scripts and mucking about. I really do admire vim's tenacity, but seriously. The time spent learning a modern IDE pays dividends in weeks, and our careers are measured in decades.



Emacs also has LSP support built-in with eglot and a good start for treesitter support.


In my anecdotal experience the best developers are the ones that don't overly focus on their tools at all. One of the most proficient developers I've known was perfectly ok programming on a little square monitor with Visual Studio where the code window was 40% of the screen real estate.

It doesn't have to be that extreme, but it reminds me of hobby craftsmen who focus on having a garage full of the tools of the trade while never finding the time to work on a project with them.



At some point in a developer career one shifts from a tool focus to a work focus.

I used to be picky about my operating system and often would spend time making the tools that I wanted or preferred to use work within the project dev environment, as opposed to just using the tools my employer provided. It usually ends up just being easier, and if everyone is using the same tools then pair programming or collaborating becomes easier, too, as compared to having to deal with the one stubborn dev who insists on using Emacs on a Mac when everyone else is using Visual Studio.



I think the benefits come when you already know the extent of your work and you can build tools to streamline your workflows. And having something that you love working with, instead of something that frustrates you every day, is very nice.


This has been my experience as well. The most productive people are the ones who actually focus on the work instead of wasting time configuring a perfect editor.


VSCode is a significantly more pleasurable experience working over a 100ms+ network connection than either vim or emacs (this being the reason why I switched away from emacs/tramp myself).


If you haven’t already, and I know this doesn’t hold up for GUI emacs or vim, but consider running them through https://mosh.org/


Mosh is definitely an improvement over ssh, especially for connections with lag spikes (and I use it for terminal sessions). But it's no match for VSCode.


The IDE is not more complicated than your customized Vim setup once you get the latter close in functionality (you won't). I use all keybindings anyway, so it's not like the UI adds anything bad.

I switched from lightweight editors to IDEs many years ago and my productivity went up A BUNCH - even if sometimes it uses gigabytes of ram. so what? Even my old used machines have 8-16gb of memory now.

I would honestly much rather hire people that work in IDEs than people that like to "hack" on their vi/vim/emacs setup. The number of times I've been in a screen share with someone while they're trying to code or debug something with Vim etc, it just feels so slow to watch them work that I get that embarrassed-for-them feeling.



> that attract the best talent

I've seen hugely talented folk on vim/emacs/emacs+evil, and on VSCode/JB. I think what the latter tools do is make some of the advantages of being proficient in vim/emacs/regex available with less of a learning curve.

Currently there are some combinations that are simply best-in-class: VSCode+TS, JetBrains IDEA+Java/Kotlin, Visual Studio+Microsoft GUI stuff. vim/emacs may come a long way in these areas, but cannot beat the integration level offered in these combinations.

Also, you mention "proprietary", but JetBrains IDEA and VSC are open source to some extent, which does improve community imho. But the fact that they are less "open access innovation" projects, and more company owned, is clear to everyone.

Finally: AI will come to software development, and I wonder if AI tools will ever be available on truly open, community-innovated IDEs.



> I've seen hugely talented folk on vim/emacs/emacs+evil, and on VSCode/JB. I think what the latter tools do is make some of the advantages of being proficient in vim/emacs/regex available with less of a learning curve.

Take Reddit and Hacker News as a fitting analogy, a community with a higher barrier to entry/more niche will be smaller, but the quality is vastly improved. There's still going to be people who sit in both communities, and smart people in both, but it's not controversial to say that an initial learning curve tends to attract people who can pass the learning curve and are motivated to do so. Another great example is the linux kernel development process.

> Currently there are some combinations that simply best-in-class: VSCode+TS, JetBrainsIDEA+Java/Kotlin, VisualStudio+MicrosoftGuiStuff. vim/emacs may come a long way in these areas, but cannot beat the integration level offered in these combinations.

Integration in some ways, in other ways a terminal based tool that adheres to the unix philosophy is more integrated with thousands of tools than an IDE where every tool has to be converted into a bespoke series of menu items. Just look at fzf, git, rg, etc. integrations in Vim. They are only lightly wrapped and so the full power of the tool shines through, and it's easy to customize it to your specific needs, or add more tools.
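To make the "lightly wrapped" point concrete, here is a hedged sketch of that kind of composition - ripgrep finds, fzf picks, vim opens at the exact line. The function names (`parse_vimgrep`, `vg`) are mine, not from any plugin; it assumes `rg` and `fzf` are on `$PATH` and a bash-compatible shell.

```shell
# Hypothetical helper: turn one line of `rg --vimgrep` output
# ("path:line:col:text") into "file line" for the editor to jump to.
parse_vimgrep() {
  local hit=$1
  printf '%s %s\n' "${hit%%:*}" "$(printf '%s' "$hit" | cut -d: -f2)"
}

# Compose the tools: rg searches, fzf selects interactively, vim opens
# the chosen file at the matching line. Each tool's full power stays exposed.
vg() {
  local sel file line
  sel=$(rg --vimgrep "$1" | fzf) || return
  read -r file line <<<"$(parse_vimgrep "$sel")"
  vim "+$line" "$file"
}
```

Usage would be `vg 'TODO'`; because nothing is hidden behind menus, swapping in `fd`, `bat` previews, or a different editor is a one-line change.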

> Finally: AI will come to software devt, and I wonder if AI tools will ever be available on true open access innovated IDEs.

In the same vein, AI tools that act as black boxes but are integrated in the same transparent way as git or rg in Vim at least allow the editor to remain fully transparent to the end user, leaving the complexity in the LSP or bespoke tool. I really see no difference between how AI tools will relate to editing and how LSPs do today.



> Take Reddit and Hacker News as a fitting analogy

In so many ways they are not, but I see why you come to this conclusion. Some overlap in users.

To me opensource is "common good" stuff, HN and Reddit are "us playing on some one else's computer+software".

All options have integrations, gits, fzf's, etc. And AI is not just "another black box", it's going to save you a lot of typing very soon. This is good: more time for thinking and crafting; less time for boilerplate-y stuff.



> VSCode+TS, JetBrainsIDEA+Java/Kotlin, VisualStudio+MicrosoftGuiStuff

Do any of these finally have something remotely as good as Magit? Or a good email client?



I found JetBrains' Git better in many ways than my console flow. Tried, but never got into Magit, as I moved on from Emacs.


Er.... "VSCode+TS" ... wat?

ITT: people who have not used tools they're talking about with confidence.

Everything available in VSCode is available in (neo)vim, without a slow buggy UI, modals, misfocused elements, and crashes.

All the LSPs used by VSCode are easily available, including Copilot, full IntelliSense, and full LSP-backed code refactors/formats/etc.



I use VSCode every day and can't remember the last time it crashed or the UI glitched.


The file explorer constantly glitches -- it never knows where the focus is supposed to be so adding/moving/deleting/etc. files ends up selecting the wrong ones.

Most plugins that add barely any IDE-like functionality grind the whole thing to a halt.

Whether you're on insert mode or replace on autocompletes is random, and changed by plugins.

The list goes on.

VSCode is an extremely poor quality piece of desktop software hacked together with web tech. It's an amazing plugin for a website.



VS Code has been crashing at launch on Wayland for more than eight months:

https://github.com/electron/electron/issues/37531



Just today I helped a coworker patch their /etc/bash.bashrc because VSC's bash integration was broken enough to not load bash-completion. Apparently, VSC would rather hijack bash's entire boot process (via the --init-file flag) and then simulate, obviously poorly, bash's internal loading process, instead of just sourcing a file into bash after it loads.
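For context, the kind of patch described above is just an explicit `source` added to the system bashrc, so completion loads even when a tool bypasses bash's normal startup with `--init-file`. This is a hedged sketch of an rc-file fragment, not the coworker's exact fix; the path below is the Debian/Ubuntu default and may differ on other distros.

```shell
# Fragment for /etc/bash.bashrc: source bash-completion explicitly,
# guarded so it is a no-op in POSIX mode or if the package is absent.
if ! shopt -oq posix && [ -f /usr/share/bash-completion/bash_completion ]; then
    . /usr/share/bash-completion/bash_completion
fi
```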


Neovim has been hugely problematic for me as an IDE (lots of plugins). Lots of errors related to OS dependencies I need to manually install and keep up to date.


Visual Studio Code is currently where all of the tooling effort is focused. For the best tools with the best integration you should be using it or, depending on language, JetBrains. These are easy to use and developers can be productive in them from the word go -- without extensive customization. Hell, if you open a file in a new programming language, VSCode will suggest plugins to install.

You do NOT want to build integrations by yourself. You want to build whatever product you're building. The fact that vim and emacs users do this is yak-shaving that pulls precious time away from the task at hand and serves as a distraction for editor fetishists. Do not become an editor fetishist. Modern IDEs help you become more productive faster and give you more support for things like debugging. (You are using a debugger to inspect and analyze your code, right?)



>"Technologists who choose the propriety path of least resistance when it comes to their most important tools I think are ultimately missing out in a lot of ways"

Nope. Not missing Vim. Live and let live


