Xortran - A PDP-11 Neural Network With Backpropagation in Fortran IV

Original link: https://github.com/dbrll/Xortran

XORTRAN is a multilayer perceptron (MLP) implemented in FORTRAN IV, designed to learn the XOR problem on a vintage PDP-11/34A (emulated via SIMH). The project demonstrates that a basic neural network was feasible with 1970s technology. The network uses a single hidden layer of four neurons with leaky ReLU activation and is trained with backpropagation and mean squared error. Key features include He-like initialization, learning-rate annealing, and a tanh output layer. Running under RT-11, the code requires at least 32 KB of memory and an FP11 floating-point processor. Training the 17 parameters takes a few minutes on real hardware (or in a throttled SIMH environment). The output shows the loss decreasing over the epochs, ending with accurate XOR predictions. XORTRAN is both a retro-computing exercise and a historical bridge between early scientific computing and modern machine learning, released under the MIT License.

A recent Hacker News post highlighted this PDP-11 neural network written in Fortran IV, sparking discussion about neural networks and computing history. Users recalled early attempts to run neural networks on similar hardware, initially considered impractical, a view now being revisited. The PDP-11, though modest by today's standards (roughly comparable to a turbo XT), was reliable, and its floating-point unit was crucial for these early experiments. The original Fortran IV compiler was noted for its stack-machine architecture and was surprisingly well optimized for the FPU. The author clarified that the code can run even without an FPU, presumably via software floating-point emulation. The conversation also touched on the evolution of Fortran, joking debates about release dates, and the historical context of "PDP" as an acronym. Many commenters appreciated the project as a way to understand the fundamentals of neural networks without modern complexity.

Original article

XORTRAN is a multilayer perceptron (MLP) written in FORTRAN IV, compiled and executed under RT-11 on a PDP-11/34A (via SIMH simulator).

It learns the classic non-linear XOR problem using:

  • One hidden layer (4 neurons, leaky ReLU activation)
  • Backpropagation with mean squared error loss
  • He-like initialization (manual Gaussian via Box-Muller lite)
  • Learning rate annealing (0.5 → 0.1 → 0.01)
  • Tanh output
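
The FORTRAN IV source is not reproduced here, but the architecture described above can be sketched in modern Python. This is a minimal sketch, not the author's code: the leaky-ReLU slope (0.01), full-batch gradient descent, the epoch thresholds for the annealing schedule, and the exact form of the "Box-Muller lite" initialization are all assumptions; the original may differ in these details.

```python
import math, random

random.seed(1)

# XOR training set: inputs and expected outputs
DATA = [((0.0, 0.0), 0.0), ((0.0, 1.0), 1.0),
        ((1.0, 0.0), 1.0), ((1.0, 1.0), 0.0)]

NHID = 4      # hidden neurons, as in XORTRAN
ALPHA = 0.01  # leaky-ReLU slope (assumed; the README does not state it)

def gauss_box_muller():
    # "Box-Muller lite": one Gaussian sample from two uniforms
    u1, u2 = 1.0 - random.random(), random.random()
    return math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)

# He-like initialization: std = sqrt(2 / fan_in)
w1 = [[gauss_box_muller() * math.sqrt(2.0 / 2) for _ in range(2)]
      for _ in range(NHID)]
b1 = [0.0] * NHID
w2 = [gauss_box_muller() * math.sqrt(2.0 / NHID) for _ in range(NHID)]
b2 = 0.0

def forward(x):
    h_pre = [sum(w1[j][i] * x[i] for i in range(2)) + b1[j] for j in range(NHID)]
    h = [p if p > 0 else ALPHA * p for p in h_pre]  # leaky ReLU
    o = math.tanh(sum(w2[j] * h[j] for j in range(NHID)) + b2)  # tanh output
    return h_pre, h, o

def train(epochs=500):
    global b2
    losses = []
    for epoch in range(1, epochs + 1):
        # learning-rate annealing: 0.5 -> 0.1 -> 0.01 (thresholds assumed)
        lr = 0.5 if epoch <= 200 else (0.1 if epoch <= 400 else 0.01)
        gw1 = [[0.0, 0.0] for _ in range(NHID)]
        gb1 = [0.0] * NHID
        gw2 = [0.0] * NHID
        gb2 = 0.0
        total = 0.0
        for x, t in DATA:
            h_pre, h, o = forward(x)
            err = o - t
            total += err * err
            # backprop through MSE and tanh: d/do mean(err^2) * (1 - o^2)
            go = 2.0 * err * (1.0 - o * o) / len(DATA)
            for j in range(NHID):
                gh = go * w2[j] * (1.0 if h_pre[j] > 0 else ALPHA)
                gw2[j] += go * h[j]
                gb1[j] += gh
                for i in range(2):
                    gw1[j][i] += gh * x[i]
            gb2 += go
        # full-batch update (the original may update per sample instead)
        for j in range(NHID):
            w2[j] -= lr * gw2[j]
            b1[j] -= lr * gb1[j]
            for i in range(2):
                w1[j][i] -= lr * gw1[j][i]
        b2 -= lr * gb2
        losses.append(total / len(DATA))
    return losses

losses = train()
print("first loss: %.6f  final loss: %.6f" % (losses[0], losses[-1]))
for x, t in DATA:
    print("%g %g GOT:%.6f EXPECTED:%g" % (x[0], x[1], forward(x)[2], t))
```

The sketch has the same 17 trainable parameters as XORTRAN (8 + 4 hidden weights and biases, 4 + 1 output weights and bias) and prints the loss and final predictions in roughly the same format as the transcript below.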

The code compiles with the DEC FORTRAN IV compiler (1974). Execution requires a system with at least 32 kilobytes of memory and an FP11 floating-point processor. The PDP-11/34A was chosen as it was the smallest and most affordable PDP-11 equipped with an FP11 floating-point processor in the 1970s.

The training of the 17 parameters should take less than a couple of minutes on the real hardware. In SIMH, setting the throttle to 500K (set throttle 500K) will provide a more realistic execution speed.

The output shows the mean squared loss every 100 epochs, followed by the final predictions from the forward pass.

The network converges towards the expected XOR outputs after a few hundred epochs, gradually reducing the error until it accurately approximates the desired results.

.RUN XORTRN
   1  0.329960233835D+00
 100  0.195189856059D+00
 200  0.816064184115D-01
 300  0.654882376056D-02
 400  0.109833284544D-02
 500  0.928130032748D-03

0 0 GOT:0.008353 EXPECTED:0.
0 1 GOT:0.979327 EXPECTED:1.
1 0 GOT:0.947050 EXPECTED:1.
1 1 GOT:0.020147 EXPECTED:0.
STOP --
  • In SIMH, attach the RL1 drive (ATT RL1 xortran.rl1).

  • In RT-11 (I use the single-job RT-11-SJ V5), assuming DL1: is assigned to DK::

.FORTRAN/LIST:XORTRN.LST XORTRN.FOR
.LINK XORTRN.OBJ,FORLIB
.RUN XORTRN

Or if you just want to run the binary:

This project demonstrates that a minimal FORTRAN IV environment from the 1970s was sufficient to implement a basic neural network with backpropagation.
It’s both a retro-computing curiosity and a small historical experiment bridging early scientific computing and modern machine learning.

© 2025 Damien Boureille

This code is released under the MIT License.
You are free to use, copy, modify, and redistribute it, provided that you credit the original author.
