The PowerPC Has Still Got It (Llama on G4 Laptop)

Original link: https://www.hackster.io/news/the-powerpc-has-still-got-it-c4348bd7a88c

## Old Tech, New Tricks: Running AI on a 20-Year-Old PowerBook

Apple's M-series chips dominate headlines today, but the company previously shipped computers built on PowerPC processors. Recently, retrocomputing enthusiast Andrew Rossignol showed that those old chips are not obsolete by running a large language model (LLM) on a 2005 PowerBook G4. Despite the machine's limitations, a 1.5GHz processor and just 1GB of RAM, Rossignol ported an open-source LLM inference engine (llama2.c) and the TinyStories model to the PowerPC's big-endian architecture, which required substantial software modification and manual memory management. Performance was slow: text generation ran at 0.77 tokens per second, versus 6.91 on a modern Intel Xeon. Leveraging the PowerPC's AltiVec vector-processing extension, however, raised that to 0.88 tokens per second. The experiment shows that, with clever optimization, even twenty-year-old hardware can take part in modern AI, pushing the boundaries of what old technology can do.

## PowerPC Meets LLMs: A Retrocomputing Revival

A recent Hackster.io article showing the Llama large language model running on a PowerPC G4 laptop sparked a lively discussion on Hacker News. The poster and others revealed that they had successfully run LLMs (including llama.cpp and qwen3.c) on a variety of "weird, old systems," among them SPARC, PA-RISC, RISC-V, Alpha, POWER9, and assorted x86/ARM platforms. The conversation highlighted the surprising capability of older hardware, with one user sharing a video of an SGI system running an LLM. It also touched on Apple's role in the creation of the PowerPC through the AIM alliance with IBM and Motorola, clarifying that while Apple did not fabricate the chips itself, it did take part in their engineering. Many commenters expressed nostalgia for the build quality of older Apple PowerPC laptops such as the iBook G4, and shared experiences of using them for everyday tasks even today, often relying on remote-browser solutions to work around web-compatibility problems. The thread underscores growing interest in putting old hardware to work on modern applications such as AI by exploiting efficient matrix operations.

## Original Article

For most people, the term “Apple silicon” brings to mind powerhouse processors like the M4 Max. Since Apple went through a lengthy Intel phase prior to the development of their M-series chips, it is often assumed that these are their first custom processors. But twenty years ago, Apple had different custom silicon in their computers — PowerPC microprocessors.

The advantages of these earlier chips were not as clear-cut as those of the M-series. Diehard Apple fans swore they were superior, while the PC crowd wouldn't touch them with a ten-foot pole. In any case, they are a couple of decades old at this point, so they do not have a lot of gas left in the tank. Andrew Rossignol, however, does not believe the tank is empty just yet. Rossignol recently demonstrated that a PowerBook G4 from 2005 is capable of getting in on the action of running modern artificial intelligence (AI) algorithms, with some caveats, of course.

Process different

Rossignol, a vintage computing enthusiast, successfully ran a large language model (LLM) on a 1.5GHz PowerBook G4, a machine with just 1GB of RAM and a 32-bit processor. The experiment used a fork of llama2.c, an open-source LLM inference engine originally developed by Andrej Karpathy. Given the hardware constraints of the PowerBook, Rossignol chose the TinyStories model, a relatively small model with 110 million parameters that was designed specifically for generating simple short stories.

To make this work, Rossignol had to modify the original software to accommodate the PowerPC’s big-endian architecture, which differs from the little-endian format that most modern processors use. This involved converting model checkpoints and tokenizer data to the appropriate format, ensuring that numerical data was processed correctly. Additionally, the memory alignment requirements of the aging PowerPC chip meant that weights had to be copied into memory manually, rather than being memory-mapped as they would be on an x86 system.

Well, technically it works

Performance was, predictably, not so good. Running the model on an Intel Xeon Silver 4216 processor achieved a processing speed of 6.91 tokens per second. The same model on the PowerBook G4, however, managed just 0.77 tokens per second — taking a full four minutes to generate a short paragraph of text.

To improve performance, Rossignol leveraged AltiVec, the PowerPC’s vector processing extension. By rewriting the core matrix multiplication function using AltiVec’s single instruction, multiple data capabilities, he was able to increase inference speed to 0.88 tokens per second — a modest improvement, but you have to take what you can in a project like this.

Despite the slow performance, the fact that a 20-year-old laptop could successfully run a modern AI model at all is impressive. The PowerBook’s outdated architecture, limited RAM, and lack of specialized accelerators posed a number of challenges, but careful software optimizations and a deep understanding of the hardware allowed Rossignol to push the system well beyond its expected limits.
