(comments)

Original link: https://news.ycombinator.com/item?id=43654881

This Hacker News thread discusses the Rust-CUDA project, which aims to let Rust code run on NVIDIA GPUs. While promising, several users point out its past usability problems and suggest Cudarc as a more actively maintained alternative. A key difference is that Rust-CUDA aims to support sharing data structures between host and GPU, whereas Cudarc takes a serialization approach and lacks this capability. A Rust-GPU maintainer confirms that goal, notes that the focus is shifting from graphics toward general-purpose compute, and asks for contributors. Concerns are raised about CUDA's vendor lock-in, and some argue for a more general Rust-to-GPU compilation approach. The discussion touches on alternatives such as Vulkan and OpenCL, while acknowledging that their ecosystems are less mature than CUDA's. Overall, the thread highlights Rust's potential for GPU programming and the challenge of arriving at a stable, general, and widely adopted solution. Some suggest that NVIDIA would need to invest more in Rust support for CUDA before it becomes as practical as the C++ tooling.

Related articles
  • Rust CUDA Project 2025-04-11
  • (comments) 2023-12-10
  • (comments) 2023-12-20
  • (comments) 2025-03-30
  • (comments) 2024-06-07

  • Original
    Rust CUDA Project (github.com/rust-gpu)
    96 points by sksxihve 7 hours ago | 32 comments


    Summary, from someone who uses CUDA from Rust in several projects (computational chemistry and cosmology simulations):

      - This lib has been in an unusable and unmaintained state for years. I.e., to get it working, you need to use specific, several-years-old versions of both rustc and CUDA.
      - It was recently rebooted. I haven't tried the Github branch, but there isn't a release yet. Has anyone verified if this is working on current Rustc and CUDA yet?
      - The Cudarc library (https://github.com/coreylowman/cudarc) is actively maintained, and works well. It does not, however, let you share host and device data structures; you will [de]serialize as a byte stream, using functions the lib provides. Works on any (within past few years at least) CUDA version and GPU.
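    A minimal sketch of that cudarc workflow, assuming a cudarc release in the 0.11/0.12 range (newer versions have reorganized this API around CudaContext and streams, so method names may differ):

      // Flat buffers are copied between host and device; no shared structs.
      use cudarc::driver::CudaDevice;

      fn main() -> Result<(), Box<dyn std::error::Error>> {
          let dev = CudaDevice::new(0)?; // GPU 0

          // Host data goes to the device as a plain buffer...
          let host: Vec<f32> = vec![1.0, 2.0, 3.0, 4.0];
          let on_gpu = dev.htod_copy(host)?; // CudaSlice<f32> living on the device

          // ...kernels loaded from PTX (load_ptx / get_func / launch) would
          // operate on `on_gpu` here...

          // ...and the result comes back the same way.
          let back: Vec<f32> = dev.dtoh_sync_copy(&on_gpu)?;
          println!("{back:?}");
          Ok(())
      }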
    
    I highlight this as a trend I see in software libs, in Rust more than others: The projects that are promoted the most are often not the most practical or well-managed ones. It's not clear from the description, but maybe rust-CUDA intends to allow shared data structures between host and device? That would be nice.


    I’m a rust-GPU maintainer and can say that shared types on host and GPU are definitely intended. We’ve mostly been focused on graphics, but are shifting efforts to more general compute. There’s a lot of work though, and we all have day jobs - we’re looking for help. If you’re interested in helping you should say so at our GitHub.
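    For readers unfamiliar with what shared host/GPU types buy you, a hypothetical illustration (the struct below is made up, not taken from rust-GPU or Rust-CUDA): the same #[repr(C)] definition is compiled into both the host crate and the kernel crate, so buffers of it cross the boundary without hand-written (de)serialization.

      // Hypothetical shared type: one definition, used by both host and GPU code.
      #[repr(C)]
      #[derive(Clone, Copy)]
      pub struct Particle {
          pub position: [f32; 3],
          pub velocity: [f32; 3],
          pub mass: f32,
      }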


    What is the intended distinguisher between this and WGPU for graphics? I didn't realize that was a goal; I've seen it mostly discussed in the context of CUDA. There doesn't have to be one, but I'm curious, as the CUDA/GPGPU side of the ecosystem is less developed, while catching up to WGPU may be a tall order. From a skim of its main page, it seems like it may also focus on writing shaders in Rust.

    Tangent: what is the intended distinguisher between Rust-CUDA and Cudarc? Rust shaders with shared data structures is the big one, I'm guessing. That would be great! There of course doesn't have to be one; more tools to choose from, and they encourage progress in each other.



    We observed the same thing here at Copper Robotics, where we absolutely need good CUDA bindings for our customers; in general, the lack thereof has been holding back Rust in robotics for years. Finally, with cudarc, we have some hope for a stable project that keeps up with the ecosystem. The remaining interesting question at that point is why Nvidia is not investing in the Rust ecosystem.


    I was talking to one person from the CUDA Core Compute Libraries team. They hinted that in the next 5 years, NVIDIA could support Rust as a language to program CUDA GPUs.

    I also read a comment on a post on r/Rust that Rust’s safe nature makes it hard to use for programming GPUs. Don’t know the specifics.

    Let’s see how it happens!



    They kind of are, but not in CUDA directly.

    https://github.com/ai-dynamo/dynamo

    > NVIDIA Dynamo is a high-throughput low-latency inference framework designed for serving generative AI and reasoning models in multi-node distributed environments.

    > Built in Rust for performance and in Python for extensibility,

    Says right there where they see Rust currently.



    Damn. I transferred ownership of the cudnn and cudnn-sys crates (by now almost 10-year-old crates that I'm certain nobody ever managed to use for anything useful) to the maintainers a few years back, as the project looked to be on a good trajectory, but it seems like they never managed to actually release the crates. Hope the reboot pulls through!


    I think that's true of most newer languages; there's always a rush of libraries once a language starts to get popular. For example, Go has lots of HTTP client libraries even though it also has an HTTP library in the standard library.

    relevant xkcd, https://xkcd.com/927/



    I think this was also in small part due to them (Rob Pike perhaps? Or Brad) live-streaming the creation of an HTTP server back in the early days; it was good tutorial fodder.


    I’ve been using the cudarc crate professionally for a while to write and call CUDA from Rust. Can highly recommend. You don’t have to use super old rustc versions, although I haven’t looked recently at exactly what you do need.


    Works on any recent Rust and CUDA version. The maintainer has historically added support for new GPU series and CUDA versions quickly.


    Very cool to see this project get rebooted. I'm hoping it will have the critical mass needed to actually take off. Writing CUDA kernels in C++ is a pain.

    In theory, since NVVM IR is based on LLVM IR, Rust on CUDA should be quite doable. In practice, though, it is of course an extreme amount of work.
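    For a sense of what that looks like, a kernel sketch in the style of the Rust-CUDA project's older examples (the #[kernel] attribute and thread helpers come from its cuda_std crate; names may have shifted in the reboot):

      use cuda_std::prelude::*;

      // Elementwise add; the project's rustc_codegen_nvvm backend lowers this
      // through NVVM IR to PTX.
      #[kernel]
      pub unsafe fn add(a: &[f32], b: &[f32], c: *mut f32) {
          let idx = thread::index_1d() as usize;
          if idx < a.len() {
              *c.add(idx) = a[idx] + b[idx];
          }
      }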



    Unless NVIDIA actually embraces this, it will never be better than the C++ experience, if only because of the whole IDE integration, graphical debugging, and libraries ecosystem.

    That is, unless one is prepared to do lots of yak shaving; and who knows, maybe then NVIDIA will actually pay attention, as has happened with CUDA support for other ecosystems.



    Shouldn't it be called RUDA?


    Looks like a dead end. Why CUDA? There should be some way to use Rust for GPU programming in a general fashion, without being tied to Nvidia.


    There's no cross-vendor API which exposes the full power of the hardware. For example, you can use Vulkan to do compute on the GPU, but it doesn't expose all of the features that CUDA exposes, and you need to do the legwork yourself reimplementing all of the well-optimized libraries (e.g. cuBLAS or cuDNN) that you get for free with CUDA.


    Make a compiler that takes Rust and compiles it into some IR, then another compiler that compiles that IR into GPU machine code. Then it can work, and that's going to be your API (what you developed in Rust).

    That's the whole point of what's missing. Not some wrapper around CUDA.
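    For what it's worth, a piece of that pipeline already exists in rustc via the tier-2 nvptx64-nvidia-cuda target: Rust is lowered to LLVM IR and emitted as PTX, which the CUDA driver then compiles to machine code per architecture. A rough nightly-only sketch (feature gates and intrinsic paths have moved around between nightlies, so treat the details as approximate):

      // Build with something like: cargo +nightly build --target nvptx64-nvidia-cuda
      #![no_std]
      #![feature(abi_ptx, stdarch_nvptx)] // feature names have varied across nightlies

      use core::arch::nvptx;
      use core::panic::PanicInfo;

      #[panic_handler]
      fn panic(_: &PanicInfo) -> ! {
          loop {}
      }

      // `extern "ptx-kernel"` marks a GPU kernel entry point in the emitted PTX.
      #[no_mangle]
      pub unsafe extern "ptx-kernel" fn scale(data: *mut f32, factor: f32, len: usize) {
          // Standard 1D grid-stride-free indexing from the nvptx intrinsics.
          let idx = (nvptx::_block_idx_x() * nvptx::_block_dim_x() + nvptx::_thread_idx_x()) as usize;
          if idx < len {
              *data.add(idx) *= factor;
          }
      }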



    Because others so far have failed to deliver anything worth using that comes with the same tooling ecosystem as CUDA.


    While I agree that CUDA is the best-in-class API for GPU programming, OpenCL, Vulkan compute shaders, and SYCL are usable alternatives. I'm using compute shaders, for example, to write GPGPU algorithms that work on Mac, AMD, Intel, and Nvidia. It works OK. The debugging experience and ecosystem suck compared to CUDA, but being able to run the algorithms across platforms is a huge advantage over CUDA.


    How are you writing compute shaders that work on all platforms, including Mac? Are you just writing Vulkan and relying on MoltenVK?

    AFAIK, the only solution that actually works on all major platforms without additional compatibility layers today is OpenCL 1.2 - which also happens to be officially deprecated on MacOS, but still works for now.



    Yes, MoltenVK works fine. Alternatively, you can also use WebGPU (there are C++ and Rust native libs) which is a simpler but more limiting API.


    WebGPU has no support for tensor cores (or their Apple Silicon equivalents). Vulkan has an Nvidia extension for them; is there any way to make MoltenVK use simdgroup_matrix instructions in compute shaders?


    AFAIK, MoltenVK doesn't. Dawn (Google's C++ WebGPU implementation) does have some experimental support for it [0][1].

    [0] https://issues.chromium.org/issues/348702031

    [1] https://github.com/gpuweb/gpuweb/issues/4195



    And is stuck with C99, versus C++20, Fortran, Julia, Haskell, C#, anything else someone feels like targeting PTX with.


    Technically, OpenCL can also include inline PTX assembly in kernels (unlike any compute shader API I've ever seen), which is relevant for targeting things like tensor cores. You're absolutely right about the language limitation, though.


    No, they aren't, because they lack the polyglot support of CUDA and, as you acknowledge, the debugging experience and ecosystem suck.


    Why do you need to run across all those platforms? What's the cost-benefit of doing so?


    Well it really depends on the kind of work you're doing. My (non-AI) software allows users to run my algorithms on whatever server-side GPU or local device they have. This is a big advantage IMO.


    To deliver, you need to make Rust target the GPU in a general way, like some IR, and then maybe compile that into GPU machine code for each GPU architecture specifically.

    So this project is a dead end, because they are the "others" here: they are developing it, and they are doing it wrong.



    Plus IDE support, Nsight-level debugging, and GPU libraries. Yes, it is most likely bound to fail unless NVidia, as has happened with other languages, sees enough business value to lend a helping hand.

    They are already using Rust in Dynamo, even though the public API is Python.



    CUDA is the easiest-to-use and most popular GPGPU framework. I agree that it's unfortunate there aren't good alternatives! As kouteiheika pointed out, you can use Vulkan (or OpenCL), but they are not as pleasant.


    It defeats the purpose. The easy-to-use option should be something in Rust, not CUDA.





