# AyaFlow: A high-performance, eBPF-based network traffic analyzer written in Rust

Original link: https://github.com/DavidHavoc/ayaFlow

## AyaFlow: High-Performance Network Traffic Analysis

AyaFlow is a Rust-based network traffic analyzer that uses eBPF for kernel-native visibility with minimal overhead. Designed for Kubernetes, it runs as a sidecarless DaemonSet: one pod per node, with no privileged sidecars.

It captures ingress/egress traffic with TC hooks, parses packet headers, and pushes events into a ring buffer. A userspace agent built with Tokio and Axum consumes this data, maintains live connection state (DashMap), persists events to SQLite for historical analysis, and exposes a REST API with Prometheus metrics.

Key features include real-time monitoring via a dashboard, persistent history with configurable retention, optional deep L7 inspection (TLS SNI & DNS), and IP allowlisting for API access. AyaFlow requires a Linux kernel >= 5.8 with BTF support and uses minimal resources (~33 MB RSS). Built on the Aya eBPF framework, it provides a complete solution for node-wide network observability.

Discussed on Hacker News: 13 points, submitted by tanelpoder.
## Original Text

A high-performance, eBPF-based network traffic analyzer written in Rust. Designed to run as a sidecarless DaemonSet in Kubernetes, providing kernel-native visibility into node-wide network traffic with minimal overhead.

Built on the Aya eBPF framework.

```
Kernel:     NIC --> TC Hook (eBPF, ingress + egress) --> RingBuf
                                                            |
Userspace:                                         Tokio Event Loop
                                                  /        |        \
                                           DashMap      SQLite    Axum HTTP
                                        (live stats)  (history)  (API + /metrics)
```
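The userspace half of this pipeline can be sketched in miniature: pop events off the ring buffer and fold them into per-connection counters. The `PacketEvent` layout and field names below are illustrative assumptions (the real struct lives in `ayaflow-common`), and a plain `HashMap` stands in for the concurrent `DashMap` the agent actually uses:

```rust
use std::collections::HashMap;

// Hypothetical, simplified version of the PacketEvent the eBPF program
// pushes into the ring buffer (this field layout is an assumption).
#[derive(Clone, Copy)]
struct PacketEvent {
    src_ip: u32,
    dst_ip: u32,
    src_port: u16,
    dst_port: u16,
    len: u32,
    egress: bool, // direction tag set by the TC classifier
}

#[derive(Default)]
struct ConnStats {
    packets: u64,
    bytes: u64,
}

// Key a connection by its 4-tuple; the real agent keeps this in a DashMap
// for concurrent access, a plain HashMap is used here for brevity.
type ConnKey = (u32, u16, u32, u16);

fn update(stats: &mut HashMap<ConnKey, ConnStats>, ev: &PacketEvent) {
    let key = (ev.src_ip, ev.src_port, ev.dst_ip, ev.dst_port);
    let entry = stats.entry(key).or_default();
    entry.packets += 1;
    entry.bytes += ev.len as u64;
}
```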
  • Kernel-side: A TC (Traffic Control) classifier attached at both ingress and egress parses Ethernet/IPv4/TCP/UDP headers and pushes lightweight PacketEvent structs (with a direction tag) to a shared ring buffer.
  • Userspace: An async Tokio agent polls the ring buffer, maintains live connection state in a DashMap, persists events to SQLite, and exposes a REST API with Prometheus metrics.
  • eBPF-native capture -- No libpcap, no privileged sidecar. Hooks directly into the kernel's traffic control subsystem.
  • Sidecarless DaemonSet -- One pod per node instead of one per application pod.
  • Real-time monitoring -- Live dashboard via REST API + WebSocket streaming.
  • Persistent history -- SQLite storage with configurable data retention and aggregation.
  • Deep L7 inspection -- Optional TLS SNI and DNS query extraction for domain-level visibility into encrypted traffic.
  • Prometheus /metrics -- Native exporter for ayaflow_packets_total, ayaflow_bytes_total, ayaflow_active_connections, ayaflow_domains_resolved_total, ayaflow_deep_inspect_packets_total.
  • IP allowlist -- Restrict API/dashboard access by source CIDR.
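The IP allowlist feature reduces to a prefix match per source address. A minimal sketch, assuming standard IPv4 `a.b.c.d/len` CIDR notation for `--allowed-ips` (AyaFlow's actual parsing and matching may differ):

```rust
use std::net::Ipv4Addr;

// An IPv4 address matches "a.b.c.d/len" if the top `len` bits agree.
fn in_cidr(addr: Ipv4Addr, cidr: &str) -> bool {
    let (net, len) = match cidr.split_once('/') {
        Some((n, l)) => (n, l.parse::<u32>().unwrap_or(32).min(32)),
        None => (cidr, 32), // a bare address is treated as a /32
    };
    let Ok(net) = net.parse::<Ipv4Addr>() else { return false };
    if len == 0 {
        return true; // 0.0.0.0/0 matches everything
    }
    let mask = u32::MAX << (32 - len);
    (u32::from(addr) & mask) == (u32::from(net) & mask)
}

// An empty allowlist means unrestricted access (the documented default).
fn allowed(addr: Ipv4Addr, allowlist: &[&str]) -> bool {
    allowlist.is_empty() || allowlist.iter().any(|c| in_cidr(addr, c))
}
```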
  • Rust: Stable + nightly toolchain
  • bpf-linker: cargo +nightly install bpf-linker
  • Linux kernel: >= 5.8 with BTF support (for eBPF)
  • Capabilities: CAP_BPF, CAP_NET_ADMIN, CAP_PERFMON
```sh
# Install bpf-linker (one-time)
cargo +nightly install bpf-linker

# Build everything (eBPF + userspace)
cargo xtask build

# Run (requires root for eBPF attachment)
sudo ./target/debug/ayaflow --interface eth0

# Smoke-test the API
curl http://localhost:3000/api/health
curl http://localhost:3000/metrics
```
| Flag | Description | Default |
|------|-------------|---------|
| `-i, --interface` | Network interface to attach eBPF on | `eth0` |
| `-p, --port` | API server port | `3000` |
| `--db-path` | SQLite database path | `traffic.db` |
| `--connection-timeout` | Stale connection cleanup (seconds) | `60` |
| `--data-retention` | Auto-delete packets older than (seconds) | disabled |
| `--aggregation-window` | Aggregate events per window (seconds) | `0` (off) |
| `--allowed-ips` | CIDR(s) allowed to access the API | unrestricted |
| `-c, --config` | Path to YAML config file | - |
| `-q, --quiet` | Suppress non-error logs | `false` |
| `--deep-inspect` | Enable DNS + TLS SNI domain extraction | `false` |
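Since `-c, --config` takes a YAML file, a config would presumably mirror the flags above. The key names here are guesses derived from the flag names and are not confirmed by the README:

```yaml
# Hypothetical config.yaml; actual key names may differ from what
# ayaflow's -c/--config expects.
interface: eth0
port: 3000
db_path: traffic.db
connection_timeout: 60
data_retention: 86400      # keep one day of packet history
aggregation_window: 10     # roll events up every 10 s
allowed_ips:
  - 10.0.0.0/8
  - 192.168.0.0/16
deep_inspect: true
quiet: false
```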

Deploy as a DaemonSet (see k8s/daemonset.yaml):

```sh
kubectl apply -f k8s/daemonset.yaml
```

The DaemonSet uses hostNetwork: true and mounts /sys/fs/bpf. Prometheus scrape annotations are included by default.

```yaml
resources:
  requests:
    memory: "32Mi"
    cpu: "50m"
  limits:
    memory: "128Mi"
    cpu: "500m"
```
| Endpoint | Method | Description |
|----------|--------|-------------|
| `/api/health` | GET | Health check with basic counters |
| `/api/stats` | GET | Uptime, throughput, connection counts |
| `/api/live` | GET | Top 50 active connections by packet count |
| `/api/history?limit=N` | GET | Recent packets from SQLite (max 1000) |
| `/api/stream` | WS | WebSocket push of stats every 1 s |
| `/metrics` | GET | Prometheus text-format metrics |
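The `/metrics` endpoint serves Prometheus' plain-text exposition format. A minimal sketch of rendering that format for three of the metrics named above; the counter values are placeholders and the metric types are assumptions, this is not AyaFlow's actual exporter code:

```rust
// Render a few metrics in Prometheus text exposition format:
// a "# TYPE" line followed by "name value" for each metric.
fn render_metrics(packets: u64, bytes: u64, active: u64) -> String {
    let mut out = String::new();
    for (name, kind, value) in [
        ("ayaflow_packets_total", "counter", packets),
        ("ayaflow_bytes_total", "counter", bytes),
        ("ayaflow_active_connections", "gauge", active), // gauge: assumption
    ] {
        out.push_str(&format!("# TYPE {name} {kind}\n{name} {value}\n"));
    }
    out
}
```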
```
ayaflow-common/   # Shared types (no_std, used by both kernel and userspace)
ayaflow-ebpf/     # eBPF kernel program (TC classifier)
ayaflow/          # Userspace agent (Aya loader + Tokio + Axum)
xtask/            # Build orchestration (cargo xtask)
k8s/              # Kubernetes DaemonSet manifest
```

Measured on a minimal VM (Ubuntu 24.04, 2 vCPU, 2 GB RAM):

| Metric | Value |
|--------|-------|
| Userspace RSS (steady-state) | ~33 MB |
| eBPF program (xlated) | 784 B |
| eBPF program (JIT-compiled) | 576 B |
| eBPF program memlock | 4 KB |
| `EVENTS` ring buffer | 256 KB |
| `PAYLOAD_EVENTS` ring buffer | 256 KB (only used when `--deep-inspect` is on) |
| Ring buffer memlock | ~270 KB (540 KB with deep inspect) |
| Memory growth over time | None observed (stable RSS) |
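For a sense of what `--deep-inspect` extracts from the `PAYLOAD_EVENTS` ring, here is a standalone sketch of pulling the query name out of a raw DNS message, following RFC 1035 label encoding. This is illustrative only and not AyaFlow's actual parser:

```rust
// Extract the first query name from a raw DNS message.
// Layout: a fixed 12-byte header, then length-prefixed labels
// ("example.com" is encoded as [7]example[3]com[0]).
fn dns_query_name(payload: &[u8]) -> Option<String> {
    let mut i = 12; // skip the DNS header
    let mut name = String::new();
    loop {
        let len = *payload.get(i)? as usize;
        if len == 0 {
            break; // root label terminates the name
        }
        if len & 0xC0 != 0 {
            return None; // compression pointer: not expected in a query
        }
        let label = payload.get(i + 1..i + 1 + len)?;
        if !name.is_empty() {
            name.push('.');
        }
        name.push_str(std::str::from_utf8(label).ok()?);
        i += 1 + len;
    }
    Some(name)
}
```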

Loading of the eBPF classifier can be verified via bpftool:

```
$ sudo bpftool prog show name ayaflow
430: sched_cls  name ayaflow  tag 0dabf78b3d068075  gpl
     loaded_at 2026-02-16T16:38:12+0100  uid 0
     xlated 784B  jited 576B  memlock 4096B  map_ids 76
```
  • OS: Ubuntu 24.04 LTS (aarch64)
  • Kernel: 6.x with BTF support
  • Hardware: 2 vCPU, 2 GB RAM (Lima VM)
  • Rust: nightly toolchain + bpf-linker

This project is licensed under either of two licenses, at your option.

The eBPF kernel components (ayaflow-ebpf) are licensed under GPL to ensure compatibility with the Linux kernel verifier.
