Contributed by Peter N. M. Hansteen from the queueing for Terabitia dept.
Bandwidth shaping in OpenBSD's PF is configured with
queue
rules in
pf.conf(5).
However, an internal 32-bit limitation in the HFSC
service curve structure (struct hfsc_sc) meant that bandwidth values
were silently capped at approximately 4.29 Gbps,
the maximum value of a u_int.
With 10G, 25G, and 100G
network interfaces now commonplace,
OpenBSD developers making steady progress unlocking the kernel for SMP,
and drivers being added for cards that support these speeds,
this limitation had started to get in the way.
Configuring bandwidth 10G on a queue would silently wrap the value around,
producing incorrect and unpredictable scheduling behaviour.
A new patch
widens the bandwidth fields in the kernel's HFSC scheduler
from 32-bit to 64-bit integers, removing this bottleneck entirely.
The diff also fixes a pre-existing display bug in
pftop(1)
where bandwidth values above 4 Gbps would be shown incorrectly.
For end users, the practical impact is simple: PF queue bandwidth configuration now works correctly on modern high-speed interfaces. The familiar syntax does what you would expect:
queue rootq on em0 bandwidth 10G
queue defq parent rootq bandwidth 8G default
Values up to 999G are supported, which is more than enough for today's interfaces and those of the foreseeable future. Existing configurations using values below 4G continue to work unchanged; no action is needed.
As always, testing of -current snapshots and
donations to the OpenBSD Foundation are encouraged.
The editors note that the tech@ thread titled
PF Queue bandwidth now 64bit for >4Gbps queues
contains the patch and a brief discussion, with the
conclusion that the code was ready to commit by Friday, March 20th, 2026.