Rust CUDA Project

Original link: https://github.com/Rust-GPU/Rust-CUDA

The Rust CUDA Project aims to make Rust a tier-1 language for GPU computing with the CUDA Toolkit. It provides tooling to compile Rust into optimized PTX code, bypassing the historically problematic LLVM PTX backend, along with libraries for interacting with CUDA libraries. The project ships crates such as `rustc_codegen_nvvm` (Rust to PTX via NVVM IR), `cuda_std` (GPU-side utilities), `cudnn` (deep learning primitives), `cust` (CPU-side CUDA interaction), `gpu_rand` (GPU-friendly random number generation), and `optix` (hardware raytracing). These crates help developers write high-performance GPU kernels and manage the CPU/GPU interface using Rust's safety features and RAII principles. This is a recently rebooted, early-stage project, so bugs and potential safety issues are to be expected, but contributions are encouraged. It provides a `Dockerfile` for building a container environment and a sample `devcontainer.json` for creating custom setups, so anyone can experiment or contribute.

This Hacker News thread discusses the Rust-CUDA project, which aims to let Rust code run on NVIDIA GPUs. While promising, some users pointed to past usability problems and suggested Cudarc as a more actively maintained alternative. A key difference is that Rust-CUDA aims to support sharing data structures between host and GPU, whereas Cudarc takes a serialization approach and lacks this capability. A Rust-GPU maintainer confirmed this goal, described a shift in focus from graphics toward general-purpose compute, and asked for contributors. Concerns were raised about CUDA's vendor lock-in, with some advocating a more general Rust-to-GPU compilation approach. The discussion touched on alternatives such as Vulkan and OpenCL, while acknowledging that their ecosystems are less mature than CUDA's. Overall, the thread highlights Rust's potential for GPU programming and the challenge of achieving a stable, general, widely adopted solution. Some suggested that NVIDIA would need to invest more in Rust support for CUDA before it becomes as practical as the C++ tooling.

Original text

An ecosystem of libraries and tools for writing and executing extremely fast GPU code fully in Rust

⚠️ The project is still in early development, expect bugs, safety issues, and things that don't work ⚠️

Important

This project is no longer dormant and is being rebooted. Please contribute!

The Rust CUDA Project aims to make Rust a tier-1 language for extremely fast GPU computing using the CUDA Toolkit. It provides tools for compiling Rust to extremely fast PTX code, as well as libraries for using existing CUDA libraries with it.

Historically, general purpose high performance GPU computing has been done using the CUDA toolkit. The CUDA toolkit primarily provides a way to use Fortran/C/C++ code for GPU computing in tandem with CPU code with a single source. It also provides many libraries, tools, forums, and documentation to supplement the single-source CPU/GPU code.

CUDA is an NVIDIA-only toolkit. Many tools have been proposed for cross-platform GPU computing, such as OpenCL, Vulkan Compute, and HIP. However, CUDA remains by far the most used toolkit for such tasks. This is why it is imperative to make Rust a viable option for use with the CUDA toolkit.

However, CUDA with Rust has historically been a very rocky road. Until now, the only viable option has been the LLVM PTX backend, which does not always work and generates invalid PTX for many common Rust operations. In recent years, the advent of projects such as rust-gpu (for Rust -> SPIR-V) has shown time and time again that a specialized solution is needed for Rust on the GPU.

Our hope is that with this project we can push the Rust GPU computing ecosystem forward and make Rust an excellent language for such tasks. Rust offers plenty of benefits, such as `__restrict__` performance benefits for every kernel, an excellent module/crate system, delimiting of unsafe areas of CPU/GPU code with `unsafe`, high-level wrappers for low-level CUDA libraries, etc.

The scope of the Rust CUDA Project is quite broad: it spans the entirety of the CUDA ecosystem, with libraries and tools to make it usable from Rust. Therefore, the project contains many crates for all corners of the CUDA ecosystem.

The current line-up of libraries is the following:

  • rustc_codegen_nvvm, a rustc backend that targets NVVM IR (a subset of LLVM IR) for the libnvvm library.
    • Generates highly optimized PTX code which can be loaded by the CUDA Driver API to execute on the GPU.
    • For the near future it will be CUDA-only, but it may be used to target amdgpu in the future.
  • cuda_std for GPU-side functions and utilities, such as thread index queries, memory allocation, warp intrinsics, etc.
    • Not a low level library, provides many utility functions to make it easier to write cleaner and more reliable GPU kernels.
    • Closely tied to rustc_codegen_nvvm which exposes GPU features through it internally.
  • cudnn for a collection of GPU-accelerated primitives for deep neural networks.
  • cust for CPU-side CUDA features such as launching GPU kernels, GPU memory allocation, device queries, etc.
    • High level with features such as RAII and Rust Results that make it easier and cleaner to manage the interface to the GPU.
    • A high level wrapper for the CUDA Driver API, the lower level version of the more common CUDA Runtime API used from C++.
    • Provides much more fine grained control over things like kernel concurrency and module loading than the C++ Runtime API.
  • gpu_rand for GPU-friendly random number generation, currently only implements xoroshiro RNGs from rand_xoshiro.
  • optix for CPU-side hardware raytracing and denoising using the CUDA OptiX library.

In addition, there are many "glue" crates, such as high-level wrappers for certain smaller CUDA libraries.
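To make the division of labor concrete, here is a sketch of a GPU-side kernel written against `cuda_std`, closely following the style of the project's own examples (the `#[kernel]` attribute and `thread::index_1d` come from `cuda_std`; compiling this requires the nightly toolchain and `rustc_codegen_nvvm` setup described in the project's guide, so treat it as illustrative rather than copy-paste ready):

```rust
// GPU-side crate, compiled to PTX via rustc_codegen_nvvm.
use cuda_std::prelude::*;

#[kernel]
pub unsafe fn add(a: &[f32], b: &[f32], c: *mut f32) {
    // Global thread index across the 1D grid.
    let idx = thread::index_1d() as usize;
    if idx < a.len() {
        // Each thread writes exactly one element, so this raw
        // write does not race with writes from other threads.
        let elem = &mut *c.add(idx);
        *elem = a[idx] + b[idx];
    }
}
```

Each GPU thread handles one element; the slice parameters carry their lengths, so the bounds check mirrors the idiomatic `if (i < n)` guard in CUDA C++.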

Other projects related to using Rust on the GPU:

  • 2016: glassful Subset of Rust that compiles to GLSL.
  • 2017: inspirv-rust Experimental Rust MIR -> SPIR-V Compiler.
  • 2018: nvptx Rust to PTX compiler using the nvptx target for rustc (using the LLVM PTX backend).
  • 2020: accel Higher-level library that relied on the same mechanism that nvptx does.
  • 2020: rlsl Experimental Rust -> SPIR-V compiler (predecessor to rust-gpu).
  • 2020: rust-gpu rustc compiler backend to compile Rust to SPIR-V for use in shaders, similar mechanism as our project.
Set up your environment, for example:

export OPTIX_ROOT=/opt/NVIDIA-OptiX-SDK-9.0.0-linux64-x86_64
export OPTIX_ROOT_DIR=/opt/NVIDIA-OptiX-SDK-9.0.0-linux64-x86_64

Then build the project:

cargo build
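Once the kernel crate has been compiled to PTX, the CPU side drives it through `cust`. Below is a minimal sketch, assuming a PTX module containing an element-wise `add` kernel that takes `(a_ptr, a_len, b_ptr, b_len, out_ptr)`; the PTX path and kernel name are illustrative, and running it requires an NVIDIA GPU:

```rust
// CPU-side sketch using cust, the project's wrapper over the
// CUDA Driver API. The PTX path and kernel signature below are
// assumptions for illustration.
use cust::prelude::*;
use std::error::Error;

static PTX: &str = include_str!("../resources/add.ptx");

fn main() -> Result<(), Box<dyn Error>> {
    // Initialize CUDA; the context is kept alive via RAII.
    let _ctx = cust::quick_init()?;
    let module = Module::from_ptx(PTX, &[])?;
    let stream = Stream::new(StreamFlags::NON_BLOCKING, None)?;

    let a = vec![1.0f32; 1024];
    let b = vec![2.0f32; 1024];
    let a_gpu = DeviceBuffer::from_slice(&a)?;
    let b_gpu = DeviceBuffer::from_slice(&b)?;
    let out_gpu = DeviceBuffer::from_slice(&vec![0.0f32; 1024])?;

    let func = module.get_function("add")?;
    let block_size = 256u32;
    let grid_size = (a.len() as u32 + block_size - 1) / block_size;

    unsafe {
        // launch! checks argument count and launches on the stream.
        launch!(func<<<grid_size, block_size, 0, stream>>>(
            a_gpu.as_device_ptr(), a_gpu.len(),
            b_gpu.as_device_ptr(), b_gpu.len(),
            out_gpu.as_device_ptr()
        ))?;
    }
    stream.synchronize()?;

    let mut out = vec![0.0f32; 1024];
    out_gpu.copy_to(&mut out)?;
    // Each element should now be 1.0 + 2.0.
    Ok(())
}
```

Note the RAII and `Result`-based error handling mentioned above: dropping the context, module, and buffers releases the corresponding driver resources, and every fallible driver call surfaces as a `Result` rather than an error code to check by hand.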

Use Rust-CUDA in Container Environments

Distribution-specific Dockerfiles are located in the container folder. Taking Ubuntu 24.04 as an example, run the following commands from the repository root:

docker build -f ./container/ubuntu24/Dockerfile -t rust-cuda-ubuntu24 .
docker run --rm --runtime=nvidia --gpus all -it rust-cuda-ubuntu24

A sample .devcontainer.json file is also included, configured for Ubuntu 24.04. Copy it to .devcontainer/devcontainer.json to make additional customizations.

Licensed under either of

  • Apache License, Version 2.0
  • MIT License

at your discretion.

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.
