(comments)

Original link: https://news.ycombinator.com/item?id=39962023

This text describes the speaker's personal experience with the evolution of technology. He was initially reluctant to wear a watch, but later found the Apple Watch enormously useful for managing notifications. He recalls the excitement people once felt about calculators and compares it with the current younger generation's fascination with the Gameboy. The author goes on to discuss his experience with Palms, particularly the Vx model, and the subsequent arrival of smartphones. Although he has fond memories of playing games on Palm devices in his youth, he expresses disappointment at the demise of PalmOS. The text closes by reflecting on technological progress and whether one truly longs to return to the old systems, ultimately questioning whether the improvements outweigh the potential losses.

Related articles

Original


Nothing made me feel older than going to the Computer History Museum in Mountain View, CA and seeing a Palm Pilot in the display case.

It should be illegal to show things which were an integral part of your life, a short 30-ish years ago, as if they were uncovered in the ruins of some pre-civilization. Not fair at all.



I still have my original PalmPilot in a box in the attic. Its existence was a huge life lesson for me.

I asked my boss to pay for it (he did) but he said: do you use anything to organize your life and projects right now? If you don't, I don't think a PalmPilot will help you.

He was so right.



I find that often, having a tool that enhances something you never did doesn’t make you start doing that thing.

But there are exceptions. I never had an address book or calendar until I had a Palm Pilot. It might have just been that I was becoming an adult at the time, but part of it was probably a barrier-to-use factor. The Palm was a small thing I could carry to class, keep near my phone, bring to my internship job, etc. It and the need for organizing did conspire to make it my first real organizer and my first time having that information organized at all.



The key difference for me was that I didn't always have my notebook/agenda on me/at hand, so writing things was haphazard and unreliable (either I tried to remember and failed or I noted it down but consistently lost the random piece of paper I used as a fallback) whereas when I got a Palm device I always had that device at hand.

It all boiled down to the physical paper tradeoff: small filled up quickly/was too constraining a space, bigger was impractical to lug around anywhere.

So arguably I had sort of a broken system in place already but it became very intermittent to the point of being nonexistent in practice because of the constraints. Palm devices allowed me to fully realise the system.



Totally agree about there being exceptions. I never wore a watch, but when my wife got me an Apple Watch it became a huge utility to me to filter notifications to see only important things I cared about and it made me more productive to have one.


I was the opposite. I used it as a calculator, without adding any ebooks on top, but my professors were uneasy about it. Then I got a calculator to curb their anxiety.


I remember when calculators were the Forbidden Fruit because, according to maths teachers, we were "not going to be walking around with calculators in our pockets all the time."


For simple stuff, incl. calculus, I agree with them. This was for other courses, which prioritize methodical correctness rather than math knowledge, like physics and chemistry.


This one's funny because on one hand you have young people finding their grandparents' Game Boys in the attic but on the other you'll have kids of the same age recording YouTube videos about GB modding, because that scene is still huge, diverse and very lively.


As someone who had a Gameboy growing up and now has a child, I had to quickly do some math to console myself that I'm not yet "Grandfather" age. This is simply a (now) old man who had fun toys as a (normal age) adult.


But the (horrifying) maths checks out: say he got the Gameboy at the age of 12 when it launched in 1989, had a first child at twenty in 1997, that child has a child of its own also at the age of twenty in 2017, grandchild aged six or seven now excited to find granddad's Gameboy. Twenty is merely youngish for a first child, not the stuff of shotgun-wedding backblocks.


When I was 17 I had a job at Software Etc (a forerunner of Gamestop), and a person checking out asked if I had any kids. I was kind of bewildered and said "I think I'm a little young for that" and she sort of shakes her finger in front of me, "oh no you're not honey"

I've been thinking about that ever since, marking various dates in my life where I could take this theoretical kid out for a beer, etc. If my child's child had made a similar… um… life decision… I could be a great-grandfather in two years.



I'm similarly haunted by the question of an avuncular Anglo-Indian office manager some thirty years ago who upon learning that I had no kids exclaimed "How do you know that you're not just shooting blanks?!" I've wondered ever since how I'd have explained to that putative first-born that really they were just the pipe-cleaning debugging trial run, just making sure that the baby-batter cannon didn't need a rebore before settling down to a proper production run. (I'm sure that he'd have regarded my eventual brood of two as being confirmation of my lack of earnestness...a dilettante of the dong department)


The Computer History Museum also has a Dreamcast on display and that bothered me way more because there is some unresolved traumatized part of my brain left from when I was a teen still actively waiting for the Dreamcast to make its big comeback.


At the Science Museum in London, there is a collection of mobile phones, computers and consoles in one gallery. My partner and I take great joy in pointing out all the ones we've owned over the years; it's great fun to see them again.


We had an org offsite there about 5 years ago. The VP afterwards asked everyone what they most fondly remembered.

One of the younger seniors said “I remember when my Dad used a Palm Pilot!” The room had the same experience.



I still have a section on my homepage about how to create and flash custom ROMs for the Palm Vx - and somewhat surprisingly, I still now and then get emails from people asking for help with that.


Now imagine what I feel when seeing paper tapes, 8" floppies, and assembly kits for the Z80 in such museums.

On the other hand, there's some special sense of nostalgia for what I was doing back in those days.



The Museum of Science and Technology (MOST) in Syracuse has a display regarding the history and tech of cell phones. Our children were also incredulous that the Nokia brick was what passed for our first cell phones.


I still have my Visor with its VisorPhone module, the very first touch-only smartphone in existence, 5 years before the iPhone :) It was big, clunky and relatively impractical (the Visor ran on 2 AA batteries, while the phone module had its own rechargeable battery) but the complete integration with Palm Desktop (later jPilot on Linux) was a breeze.


This PumpkinOS project is pretty incredible. I can't imagine how much effort it would take to be compatible with all the system calls that the average Palm app would expect. I remember Palm did some truly weird things with memory: anything moderately large would need to be put into a special memory block that the OS could rearrange at will, and one would need to lock the block's handle to keep it stable while accessing it. Stuff like that must have been challenging (and fun) to implement in PumpkinOS.
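For anyone who never wrote for Palm OS, the handle discipline described above is easy to picture with a toy model. This is a Python sketch under invented names (the real calls were `MemHandleNew`/`MemHandleLock`/`MemHandleUnlock`), showing why handles survive compaction while raw pointers would not:

```python
# Toy model of a movable-handle heap: blocks are addressed by handle,
# lock() pins a block and hands out its current "address" (an offset),
# and compact() may slide any *unlocked* block down to close up holes.

class HandleHeap:
    def __init__(self, size):
        self.mem = bytearray(size)
        self.blocks = {}            # handle -> [offset, length, lock_count]
        self.next_handle = 1

    def alloc(self, length):
        offset = self._find_free(length)
        handle = self.next_handle
        self.next_handle += 1
        self.blocks[handle] = [offset, length, 0]
        return handle

    def _find_free(self, length):
        cursor = 0
        for offset, blen, _ in sorted(self.blocks.values()):
            if offset - cursor >= length:
                return cursor
            cursor = offset + blen
        if len(self.mem) - cursor >= length:
            return cursor
        raise MemoryError("heap full or too fragmented")

    def lock(self, handle):
        self.blocks[handle][2] += 1
        return self.blocks[handle][0]   # a "pointer": stable only while locked

    def unlock(self, handle):
        self.blocks[handle][2] -= 1

    def free(self, handle):
        del self.blocks[handle]

    def compact(self):
        cursor = 0
        order = sorted(self.blocks.items(), key=lambda kv: kv[1][0])
        for handle, (offset, length, locks) in order:
            if locks == 0 and offset != cursor:
                self.mem[cursor:cursor + length] = self.mem[offset:offset + length]
                self.blocks[handle][0] = cursor
            cursor = self.blocks[handle][0] + length
```

Freeing the first of two back-to-back allocations leaves a hole that only `compact()` can reclaim for a larger request, which is exactly why Palm apps had to tolerate their blocks moving between lock/unlock pairs.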

This brings me back. I used to make little games for Palm OS, and I was so excited for the next version of the OS which would let one use the (then new) Palm OS Development Suite to make programs. It was also the last OS I've used where an app had a central event loop. Everything else today has UI frameworks that handle it for you. Things are easier now, but I still miss it.
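The central event loop mentioned above had a very recognizable shape in Palm OS: the app itself pulled events with `EvtGetEvent` and offered each one to a chain of handlers (`SysHandleEvent`, `MenuHandleEvent`, then the form dispatcher). Here is that shape as a small Python stand-in; the handler names and event strings are invented:

```python
import collections

APP_STOP = "appStopEvent"   # the event that ends the application

def run_event_loop(queue, handlers):
    """Pull events until appStopEvent; offer each event to the
    handlers in order until one of them claims it."""
    handled = []
    while queue:
        event = queue.popleft()              # EvtGetEvent()
        if event == APP_STOP:
            break
        for handler in handlers:             # SysHandleEvent, MenuHandleEvent, ...
            if handler(event):
                handled.append((event, handler.__name__))
                break
    return handled

def menu_handler(event):
    return event.startswith("menu")          # claims only menu events

def form_handler(event):
    return True                              # the catch-all form dispatcher

events = collections.deque(["penDownEvent", "menuEvent", APP_STOP])
log = run_event_loop(events, [menu_handler, form_handler])
```

The point of the pattern is that the application, not a framework, decides when to fetch the next event and who gets first refusal on it.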



> It was also the last OS I've used where an app had a central event loop.

Windows is still like that if you use Win32 APIs directly.

All GUI toolkits ever made are like that, but in most of the modern ones, this queue and loop are usually internal and you can only infer their existence by looking at the stack in a debugger or when something crashes.



Windows doesn't actually have a central event loop, which makes it pretty unique.

macOS/iOS/etc have a central event loop in Cocoa - what's more, only the initial thread is allowed to talk to the WindowServer!

Xlib pretty much enforced a single event loop per connection - XCB allowed more.

In comparison, win32 applications can create an event loop ("message pump") per thread, and you can use GUI calls completely independently on them.



I recall there was an Ada X11 server (Mitre?) that used tasks instead of a single event loop, although they may have been papering over an event loop underneath.


The X11 server side is a bit of a different case - generally you control all message flows there in a custom way.

The windows message loop is essentially something you can create per thread, and "GUI" programs have one initialized for them (thus WinMain instead of main in C).

But you need no special work to create more threads with more loops, in fact making every top-level window a separate thread is trivial (doing it for sub-windows like widgets might be slightly more complex though).
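The per-thread message pump model described above can be mimicked with ordinary threads and queues. This sketch uses Python's stdlib in place of the real Win32 calls (`GetMessage`, `PostThreadMessage`, `DispatchMessage` are the actual API names; everything else here is a stand-in); each "window thread" drains only its own private queue:

```python
import queue
import threading

WM_QUIT = object()   # sentinel, standing in for Win32's WM_QUIT

def message_pump(inbox, received):
    """Each 'window thread' runs its own GetMessage-style loop over
    its own private queue; no other thread's messages ever appear."""
    while True:
        message = inbox.get()     # GetMessage(): blocks until a message arrives
        if message is WM_QUIT:
            break
        received.append(message)  # DispatchMessage() -> window procedure

inboxes = [queue.Queue(), queue.Queue()]
received = [[], []]
threads = [threading.Thread(target=message_pump, args=(inboxes[i], received[i]))
           for i in range(2)]
for t in threads:
    t.start()

inboxes[0].put("WM_PAINT")       # post to window/thread 0 only
inboxes[1].put("WM_KEYDOWN")     # post to window/thread 1 only
for inbox in inboxes:
    inbox.put(WM_QUIT)
for t in threads:
    t.join()
```

Because each pump owns its queue, the loops run completely independently - which is the property the comment above contrasts with toolkits that funnel everything through one loop.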



> Windows is still like that if you use Win32 APIs directly.

Which is basically the only option for C and C++ developers, when using vanilla Visual Studio, unless they want to write libraries to be consumed by .NET instead, or use a third party framework.

It is either raw Win32 or MFC, don't even bother with WinUI.



> anything moderately large would need to be put into a special memory block that the OS could rearrange at will, and one would need to lock the block's handle to keep it stable while accessing it

Didn't 16-bit Windows and classic Mac OS do something similar? If you're doing multitasking on a system without an MMU then I think that kind of live heap defragmentation would have been practically required.



More saliently than that, Palm started out as a vendor of Newton apps, before it started making its own Newton-killer hardware.

Palm's Graffiti started out as an alternate text input system for the Newton. It was an Apple software vendor long before it was an Apple rival, and its design is influenced by the Newton more proximally than the Mac.



Well - it is even more than that. They basically used Apple-style code resources to define PalmOS apps. You used to be able to compile Think Pascal code on a Mac, and some guy worked out how to convert that code resource into a PalmOS app just by tweaking the code resources in the compiled MacOS code. It was quite mind-bending to me as a 20-something PalmOS fanboy with a day job doing Delphi. This was in like, 1998/1999 or so. I even went as far as emulating MacOS just to play with it. I don't know if his code still exists online, but the tool was called SARC (Swiss Army Resource Compiler) if anyone cares to search for it.

I don't think Palm did a lot to change the exe format in the early days. And they used the same CodeWarrior 68K compiler that also targeted MacOS at the time.



> Didn't 16-bit Windows and classic Mac OS do something similar?

I assume this is what `{Local,Global}{Lock,Unlock}` were for when combined with `{Local,Global}Alloc({L,G}MEM_MOVEABLE)`.

Similar idioms occasionally persist in modern code - e.g. when dealing with FFI in GCed languages (C#'s `fixed` statement pins memory in place.)



Yes. The idea wasn't to get away with not having an MMU, though - it was to get away with shipping the Mac with an ungodly low amount of RAM for a machine with a GUI. I believe the original idea was to ship with like 64k or something?

Obviously, with the state of mobile hardware back then relocatable blocks were also similarly necessary in order to save RAM.

For anyone wondering, no, this isn't the thing that made classic Mac OS unfit for multitasking. The MMU is necessary to keep applications from writing to other apps' heaps, not to do memory defragmentation. You can do some cool software-transparent defragmentation tricks with MMUs, but if you're deciding what the ABI looks like ahead of time, then you can just make everyone carry double pointers to everything.



Well, there's also the fact that the MC68000 in the original Mac didn't have an MMU, and it was difficult to add an external MMU to a 68000 system [1]. You could use an MMU sanely starting with the MC68010, and it wasn't until I think the MC68030 that the CPU came with an integrated MMU.

[1] Because exceptions on the 68000 didn't save enough information to restart the faulting instruction. You could get around this, but it involved using two 68000s as an insane hack ...



Just to muddy the waters some more there was also an EC variant¹ of the 030 without the MMU.

The EC variant was available right through to the 060, and I'd be curious to know how prevalent the line was. I suspect the EC versions far outnumbered the "full" chips, because they appeared in all kinds of industrial systems. I'm basing that entirely on working for a company that was still shipping products with MMU-less 68k and coldfire this century, not any real data.

¹ https://en.wikipedia.org/wiki/Motorola_68030#Variants



And there's more mud to be found! The 'EC' version of the 68040 & 68060 was "no MMU, no FPU", and there was an 'LC' variant of the 68040 & 68060 that was "MMU, no FPU".

There were huge numbers of embedded 68k family chips shipped, although I've never seen the actual numbers. Folks went from 68000 to 68ec020 to 68ec060 as a (sorta) easy upgrade path. They're still made if you count the 68sec000, and the 68300 line is the spiritual successor.



> You could use an MMU sanely starting with the MC68010

Whether the Motorola MMU for the 68010 (the 68451) was sane or not is a matter of some debate. The 68451 was definitely slow and limited (segments, not pages); in the end, most vendors rolled their own MMU out of static RAM and PALs.



> I believe the original idea was to ship with like 64k or something?

I believe the original original idea was 64K and a 6809 instead of the 68000. It quickly became clear that wasn't going to live up to expectations.



> For anyone wondering, no, this isn't the thing that made classic Mac OS unfit for multitasking

Yeah, the way to port classic MacOS apps to native OS X apps was called Carbon, and it was basically 80% of the classic MacOS toolbox just ported to OS X, Handles and QuickDraw and all. Classic MacOS apps written to CarbonLib would "just run" natively in OS X (and the same binary in Classic MacOS). Carbon even kept working on Intel MacOS, but they finally killed it with the 32-bit deprecation a year or two before Apple Silicon was released.

Apple could have worked in multitasking in classic MacOS if they really wanted to, but their management was totally dysfunctional in the 90's where there was no point seen in investing in boring old MacOS since there was always a revolution just around the corner in the form of Pink, Taligent, Copland etc, projects which due to the aforementioned management never went anywhere.



> Apple could have worked in multitasking in classic MacOS if they really wanted to

They did, and they couldn’t. Most users had some code running that patched system calls locally or globally or that peeked into various system data structures, and all applications assumed the system used cooperative multitasking. Going from there to a system with preemptive multitasking would mean breaking a lot of code, or a Herculean effort to (try to) hack around all issues that it caused with existing applications. I think that would have slowed down the system so much that it wasn’t worthwhile making the effort.

Having said that, MacOS 9 had a preemptive multitasking kernel. It ran all ‘normal’ Mac applications cooperatively in a single address space, though. Applications could run ’tasks’ preemptively, but those tasks couldn’t do GUI stuff (https://developer.apple.com/library/archive/documentation/Ca...)



Microsoft actually sort of did that with Windows 9x.

There was a lot of Windows 3.1 and DOS software and drivers that they wanted to remain compatible, and those relied on DOS quirks. So, they had a copy (copies) of DOS always resident in RAM, mostly unused. All DOS syscalls (interrupts) were hooked so that they would call Windows instead. If a program added its own hooks, Windows would detect that and switch to 16-bit mode for the relevant operation. When the custom hook completed, it would call the next hook in the chain, which was the one that went back to Windows. Of course, this relied on Windows understanding DOS's internal data structures and keeping them in sync with what Windows was doing. A similar technique was used for drivers, if Windows found a driver it didn't understand, it would let that driver run in 16-bit mode.

Raymond Chen (of Microsoft fame) explains this much better than I ever could https://devblogs.microsoft.com/oldnewthing/20071224-00/?p=24...



> classic MacOS toolbox just ported to OS X

There was a company that re-implemented the macos toolbox on various Unixes. I hope somebody on HN worked there and can fill in more details.

Tldr; from [0]

The company was originally Quorum Software Systems, Inc., which transformed into the Latitude Group in 1994, and which got bought by Metrowerks in 1996-7?.

The original product was called Equal, and allowed Microsoft Word 5.1a and Excel 4.0 (the macos versions) to run on UNIX, with native [Motif?] look and feel and good performance without source code.

They later made a library called 'Latitude' so MacOS app developers could easily port their apps to Unix, which was how the Adobe apps - Photoshop, Illustrator etc. - got made available for Unix... and Latitude also apparently implemented a lot of QuickDraw.

"At the heart of Latitude is our own Portable Toolbox Implementation Layer. This layer is completely platform independent. It presents the Mac Toolbox API to the application, answers these calls through a trap table mechanism, and relies on other toolbox calls within the layer whenever possible. When a native system facility is needed, such as the display of a window or control, or some graphical rendering, this layer calls out to one of Latitude's platform dependent modules through an internal, well defined API. The toolbox layer doesn't know what kind of system lies underneath, only that calling this function will display a window or that function will draw a line, etc.

.. Because we've mapped native system facilities to Mac calls, the running application is an equal citizen on the desktop. The application's windows, menus, and control items are native system objects. Cutting and pasting between apps is facilitated by the native system's clipboard mechanism. Fonts come from the system font server -- including the default system font, which means the dialogs come up in something other than Chicago! Application windows are native windows -- not some rendering of a window inside of another. The performance hit is minimal. Latitude is merely mapping the Mac calls to the native system. There is very little processing going on in between. "

[0] http://preserve.mactech.com/articles/mactech/Vol.13/13.06/Ju...



Classic MacOS did, but it's definitely not something needed for multitasking without an MMU. For instance AmigaOS didn't do this, but instead effectively had a single shared heap.


Mac OS, Win16, PalmOS all have shared heaps too. This is precisely why you need defragmentation (after an application quits, the heap is a fragmented mess, full of holes) and therefore some system so that the other applications keep "movable handles" to heap blocks instead of raw pointers (which would become invalid after the heap undergoes one round of defragmentation).

If an OS does not do this you are basically indirectly setting a limit to its uptime, as eventually this global heap's fragmentation will prevent launching any new programs.

Having local heaps does not solve this, as you still have to allocate these local heaps from somewhere. Having an MMU will allow you to do transparent defragmentation without handles as raw pointers (virtual addresses) become your handles. Having an MMU with fixed page size will allow you to outright avoid the need for defragmentation.



> Mac OS, Win16, PalmOS all have shared heaps too

Mac OS didn’t. It had a system heap and a heap for the running application. Once it supported running multiple applications simultaneously, each of them had its own heap (https://www.folklore.org/Switcher.html: “One fundamental decision was whether or not to load all of the applications into a single heap, which would make optimal use of memory by minimizing fragmentation, or to allocate separate "heap zones" for each application. I decided to opt for separate heap zones to better isolate the applications, but I wasn't sure that was right.”)

That’s why MultiFinder had to know how much RAM to give to each application. https://en.wikipedia.org/wiki/MultiFinder#MultiFinder: “MultiFinder also provides a way for applications to supply their memory requirements ahead of time, so that MultiFinder can allocate a chunk of RAM to each according to need” (Wikipedia doesn’t mention it, but MultiFinder also allowed users to increase those settings)



Very carefully.

It's in fact one of the biggest issues with AmigaOS that made it incredibly hard to add proper MMU support. The OS is heavily message-passing based, and it's not at all always clear "from the outside" who the owner of a given structure passed via a message port (which is little more than a linked list) is, and so the OS doesn't even know which task (process/thread - the distinction was pretty meaningless due to the lack of memory protection) owns a given piece of memory.

Later versions added some (optional) resource tracking to make it easier to ensure resources are freed, but if an application crashed or was buggy you'd frequently leak memory, and eventually have to reboot. It was not great, but usually less awful than it sounds with sufficiently defensive strategies.

[I have at various points when e.g. doing some work on AROS way back, argued that it is quite likely possible to largely untangle this; partially because for a lot of cases, the ownership changes are clear and rules that fit actual uses can be determined; partially because the set of extant AmigaOS apps is small enough you could "just" add some new calls that do ownership tracking, declare the old ones legacy, and map ownership changes for the rest one by one and either patch them, or, say, add a data file for the OS to use to apply heuristics; had the remaining userbase been larger maybe it'd have been worth it]



That situation doesn't prevent an MMU and virtual memory. It prevents multiple address spaces. Multiple address spaces per process are not a requirement for virtual memory, as such. They are a requirement for getting some of the protection benefits of virtual memory. Not all the benefits. With a single address space for all applications, there can still be user/kernel protection: userland not being able to trash kernel data structures. (Of course with important system functions residing in various daemons, when those processes get trashed, it's as good as the system being trashed.)


It doesn't "prevent" an MMU and virtual memory, you're right, but it does severely limits what you can do with it hence why I wrote "proper" MMU support. There are virtual memory solutions for AmigaOS, though rarely used. There are also limited MMU tools like Enforcer, but it was almost only used by developers. AmigaOS4 has some additional MMU use, and there has been work on trying to add some more protection elsewhere as well, but it is all fairly limited.

Specifically in terms of the comment I replied to, you categorically can not automatically free memory when a task (process/thread) ends in AmigaOS without application-specific knowledge without risking causing crashes, because some memory "handoffs" are intentional.

> With a single address space for all applications, there can still be user/kernel protection: userland not being able to trash kernel data structures.

Yes, you could if the OS was designed for it, and it was done at a point where most of the application developers were still around to fix the inevitable breakage.

The problem with doing this in AmigaOS without significant API changes or auditing/patching of old code is that there is no clear delineation of ownership for a lot of things.

This includes memory in theory "owned" by the OS, that a lot of applications have historically expected to be able to at least read, and often also write to.

You also e.g. can't just redefine the "system calls" for manipulating lists and message queues to protect everything because those are also documented as ways to manipulate user-level structures - you can define your own message ports and expect them to have a specific memory layout.

More widely, it includes every message sent to or received from the OS, where there's no general rule of who owns which piece of the message sent/received. E.g. a message can - and will often - include pointers to other structures where inclusion in the message may or may not imply an ownership change or "permission" to follow pointers and poke around in internals.

To address this would mean defining lifecycle rules for every extant message type, and figuring out which applications breaks those assumptions and figuring out how to deal with them. It's not a small problem.



16-bit Windows did, but it required an MMU anyway, at least since Windows 3 - that was its big feature: 16-bit protected mode and a VM mode for running MS-DOS.


> Windows 16 bit did, but it required a MMU anyway, at least since Windows 3

Windows 3.0 supported three modes of operation: real mode (8086 minimum), standard mode (286 minimum), 386 Enhanced mode (386 minimum). Real mode was pretty limited, and a lot of apps could not fit in its rather limited memory, but it was not completely useless. I believe real mode Windows apps could use EMS, although I’m not sure if many actually did

In Windows 3.1, real mode was removed, and only standard and 386 Enhanced were supported. So, 3.1 was the first version to “require an MMU”, if by that you mean a 286 or higher



> I remember Palm did some truly weird things with memory: anything moderately large would need to be put into a special memory block that the OS could rearrange at will, and one would need to lock the block's handle to keep it stable while accessing it. Stuff like that must have been challenging (and fun) to implement in PumpkinOS.

That’s extremely easy on modern hardware with gigabytes of RAM (compared to 2 megabytes on the Palm III): just use malloc, never move memory around, and make locking and unlocking such blocks no-ops. If there is an OS call to determine lock state, you’ll have to store that somewhere, but that isn’t difficult, either.

It also isn’t hard to implement the way they did back then; it ‘just’ complicates using it.
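The "make locking a no-op" strategy described above is easy to picture: if blocks are plain allocations that never move, the handle can simply be the block itself, and lock/unlock only have to keep a count so lock-state queries still answer. A hypothetical sketch (names mirror the Palm API for flavor; this is not PumpkinOS's actual code):

```python
# Hypothetical modern reimplementation of movable handles: memory is
# never relocated, so a "handle" is just the block and locking is
# pure bookkeeping kept only for API compatibility.

class ModernHandleHeap:
    def __init__(self):
        self.lock_counts = {}               # kept only so lock-state queries still work

    def mem_handle_new(self, size):
        block = bytearray(size)             # plain allocation; never moves
        self.lock_counts[id(block)] = 0
        return block                        # the handle *is* the block

    def mem_handle_lock(self, handle):
        self.lock_counts[id(handle)] += 1   # bookkeeping only
        return handle                       # nothing to pin; "pointer" == handle

    def mem_handle_unlock(self, handle):
        self.lock_counts[id(handle)] -= 1
```

Compare this with a faithful movable-heap implementation: the whole compaction machinery simply disappears once memory is plentiful.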



One nice thing about modern hardware would be that you wouldn't exactly be memory constrained. You'd get to implement a complicated API with whatever large chunk of memory you wanted, since 128 MB of RAM or however much they came with is peanuts today.


> since 128 MB of ram or how ever much they came with is peanuts today.

The first Palm (Pilot 1000) had 128 kB. I think the biggest 68k Palm was the Palm Vx with 8MB. Towards the end of the (Intel) ARM Palms, they did have 128 MB models though.



I think only the latest Treo had 128MB - the last PDA (Lifedrive) had 64MB, the TX 32MB.

(One should remember though that there wasn't mass storage + RAM as we typically think of it - the memory of the Palm devices was storage and active memory in one. Battery-backed until the very latest models. There wasn't a filesystem as such. So all this memory should be thought of as memory for applications, nor like storage in an Android device.)



.. should not be thought of.. was what I meant to write. The memory is neither app memory nor storage, but both at once. The once-Windows based PDAs also used a combined memory setup, but there one section of the memory was for running apps, the rest for storage, i.e. different from PalmOS.


My heart thumped faster when I read this headline. Please make it work on Android so I can 'replace' my daily driver and go back to a better time!


I still miss the calendar and contact apps from the Palm. I stuck with Palm up until the Centro - and haven't found contact/calendar apps I'm as happy with as the Palm versions yet. They're either missing some simple basic features, or the UI is needlessly complicated.


So hype to lose some hours playing Space Trader. I had a Palm Vx in middle school and I have some very fond memories of playing that game under my desk in class.


What would it take to get this on modern (or even last generation) phone hardware?

I bet we could do everything we want with tremendous simplicity and out of this world battery life... Probably would make a PinePhone feel like a Rolls Royce.



According to the README, it runs natively on ARM but it looks more like a program than an OS. So I'm sure it can be updated to run on Android or iOS if the GUI code is rewritten with their respective frameworks but making a bootable OS seems more difficult. The author wrote an article last year about making it a bootable OS by using a barebones x86 kernel and QEMU so I'm sure it could probably be repurposed for ARM devices. [1]

[1] https://pmig96.wordpress.com/2023/02/24/pumpkinos-busybox-an...



I remember investing in Palm thinking that they'd eventually be the ones to build something like the iPhone. Sadly, they didn't, and when Apple did, that was it for them.


They did have the Treo line!

Arguably, what ultimately brought Palm down was their early success and the huge library of existing shareware and freeware tools:

They desperately needed to try something new (Palm OS was just showing its age as a single-threaded, in-RAM, non-virtual-memory-based OS), but couldn't, since it would have alienated long-time fans by stranding their existing software libraries.

They could never work their way through that chicken-and-egg problem (and all of the split-ups (OS vs. hardware), forks/spin-offs (Handspring), and re-mergers didn't help either) until it was too late: Cobalt OS never saw any devices, and the Pre was an ambitious new start but would have had a tough time against the iPhone even if it had launched earlier than that.



> They did have the Treo line!

Thank you for mentioning it. It makes me feel like I'm taking crazy pills when people talk about how Steve Jobs invented the smartphone. I had a series of Treos starting with the Treo 270: https://en.wikipedia.org/wiki/Treo_270

As somebody who carried a Palm for years, it was so amazing to suddenly have the internet in my pocket. It still is, really.



With the Centro it was pretty amazing when they dropped a maps application later on - as it didn't have GPS. In built up areas they managed pretty impressive accuracy just by cell tower triangulation.


I'm very split about the Centro. I used one for a while, but after years of Symbian phones, I just couldn't get over how poor the hardware and how dated the OS were in comparison, even though having all of my old Palm OS applications on my phone was great.

Nokia's N-series had GPS, Wi-Fi, and good cameras years before that and it was hard to lose all of that; Symbian had proper multitasking, persistent storage, and Unicode support; and between native Symbian and J2ME apps, the software ecosystem wasn't half bad either.

Aesthetically, I'll always be a fan of Palm OS, but I couldn't bring myself to actually use it as a daily driver due to all of these limitations.



> Steve Jobs invented the smartphone

Yeah, well people are misinformed. Unless there was something for the Newton that turned it into a phone, IBM was first to market in 1994 [1]. The IBM Simon made calls, did data, had a paid 3rd-party app, etc.

Besides Palm, Symbian (Mostly from Nokia, but some other companies made Symbian handsets), RIM's Blackberry, and Microsoft's Windows Mobile (with handsets from many OEMs) had established smartphones before Apple. Of course, the iPhone had much better sales, and changed the market in many ways, but the category was 12ish years old when Apple entered. Hardly inventing or first to market.

[1] https://en.m.wikipedia.org/wiki/IBM_Simon



And, in any case, Japan had tons of what should be considered early smartphones. I was astonished the first time I saw one (and this was obviously before the iPhone). For some reason the iPhone managed to kill off the local Japanese industry, though. Japanese youngsters (and not only youngsters) are very fashion-oriented, which had more to do with the change than anything else.


The problem was that the UI on the Japanese smartphones was completely unusable. Even how to go back to the previous menu differed between the different functions. I was there; I migrated my mother-in-law from her “galakei” to an iPhone. She went from barely being able to take a picture (there was a dialog after each photo asking where to save it!! Wtf) to instant messaging and playing Pokémon Go.


Speaking of WAP, I actually found Palm.net very interesting.

Unfortunately, the Palm VII was only available in the US, and having to dial up via serial cable or infrared using a mobile phone kind of ruined the spontaneous information lookup aspect of it.



Apple was able to manage this with a much bigger market and a lot more apps (when transitioning to OS X), so while it would be hard, I think Palm could have been able to do that as well.

But as you say, the company structure, market position and a lot more worked against them (same thing with Nokia and Symbian).



I think it was their lack of vertical integration that did it. There were too many pieces every developer needed to handle themselves, which made it really hard to make apps for it. Compare that to the App Store, where Apple just takes a 30% cut, which is steep, but they do things for that 30%. On the consumer end of things, the actually affordable data plan with AT&T at launch is more vertical integration, rather than letting the carriers do their thing, and that's what did it, imo.


The Samsung SPH-i300 came 7 years before the iPhone 1, and the iPhone 1 didn't even have apps.

I loved that thing. An Audible.com player app, third-party apps to integrate the phone dialer and contacts DB, internet... 14.4k internet, but internet! Email, browser, SSH and IRC clients, even a VNC client, thousands of random apps for every little thing like today, a color grid-of-icons home screen, touch screen, SD card, all in 2000 or 2001.



I'm going to blame it on Palm's flat refusal to move onto PalmOS 6. Every time a new device came out and it was still on PalmOS 5 the whole community was like "what the fuck are you doing?"


One big problem is that Palm split into PalmSource for software and palmOne for hardware. PalmSource went off to design a Linux-based OS, got acquired, and disappeared. palmOne, later renamed back to Palm, made some early smartphones with Palm OS. But Palm OS was pretty obsolete by then, with 16-bit apps running on a 32-bit OS.

Then Palm developed webOS, but made some weird hardware decisions and couldn't compete with the iPhone and Android. It could have been a contender, though, better positioned than BlackBerry and Nokia, which never made the jump to capacitive screens.

I think Palm's problem was being too late; even if they hadn't split and had shipped a Palm OS successor, they would still have missed the shift to capacitive smartphones.



I had a Palm Pre, and Treo before that. One problem is that webOS needed resources, and Palm chose limited hardware.

I think webOS could have been the iPhone if Apple hadn't existed. And I think they could have taken the second spot from Android if they had been earlier, more open, and had released conventional hardware.



I've heard the blame partially put on carriers: they initially resisted even carrying the Treo line without putting limits on what Handspring could do with it.

Apple had the iPod and, crucially, customers; it could bring those customers to the carriers, and so it could dictate more.

https://www.youtube.com/watch?v=b9_Vh9h3Ohw (the part I'm referencing is about 20 minutes in).



I had the whole Handspring / Palm smartphone line : Visor + Visor Phone, then Treo 270, then Treo 600, then the Treo 650, and finally the Palm Pré.

They were no iPhone, but they were extremely efficient work tools.



I was a Sprint customer from the start of the Palm phone era, and up until 2 years ago I was using colored Treos for my phone. I loved that mechanical keyboard; it was so nice to use. (I have slightly deformed fingers that make touch screens hard for me.) The merger with T-Mobile killed the radio part. So sad to see it go.


I think I used to work with you. You stated the Pre was the "iPhone killer" about six months after the iPhone came out, and Android phones appeared. You showed our department's first iPhone user the non-awesome stuff the Pre did by comparison, then stated that Palm's millions of users would show me.

They sure did.



Nope, I was the guy that had Palm Pilots, got my whole architecture group Palm Pilots, and we used to beam messages at each other in meetings. I became good at Graffiti writing; my printing today still looks like it. The Palm phones were a natural extension of that.

My first one was the 180 model, with the flip front cover. I did a quick check: I had my first Palm phone in 2002-3, and iPhones came out in 2007.



Cloudpilot is amazing, one of the most sophisticated PWAs I'm aware of! I haven't heard of Vexed, but for me, bringing back Space Trader made me very happy.


Ah yes, I think I even bought that at some point (didn't realize there was a free build on F-Droid!), but part of the nostalgic appeal to me are the native Palm OS UI widgets and buzzer sounds :)

Compiling Space Trader for Palm OS (it was open source too) was actually something I repeatedly tried in the early 2000s but unfortunately never managed to pull off – CodeWarrior was way out of my pocket money budget at the time, and I couldn't figure out how to do it using gcc and PilRC.



I love this project existing and prolonging the life of all that software that would otherwise never have a chance to execute again, but is this about nostalgia or about a real need/desire for that software?

Given how much people wax lyrical about Palm, Blackberry, Psion, outside of niche applications (I know IMAX needs Palm emulators to run bits of that stack), do people genuinely yearn for going back to the old days?

I was looking around for a "modern" Psion 5, and spotted some of the team had come up with the Gemini PDA which looks a bit like one, but is Android based. Some reviewers definitely find this a drawback. You can imagine a new EPOC operating system with support for modern connectivity would be a slam dunk for them, but then I found myself thinking I'd mostly run Linux on it (if I could find a new one in stock anywhere with a UK keyboard).

But then, do we all really want that? There has been a lot of progress in the last 20+ years, and I can't help but feel if most people were offered PalmOS instead of Android or iOS as their daily driver it would be frustrating enough they'd hand it back within a week.

What's the killer feature that makes that statement false?



So Palm OS was an old mobile operating system, introduced around 1996, coded in C++... later extended to support smartphones?

Can someone please explain more to me about this OS as it seems pretty interesting, and I have never heard of it.
