VectorWare – from creators of `rust-GPU` and `rust-CUDA`

Original link: https://www.vectorware.com/blog/announcing-vectorware/

## The Shift to GPU-Native Software: Summary

We are entering a period of rapid technological change, driven not by new *devices* but by a fundamental shift in computing architecture, from the CPU to the GPU. Technologies such as AI and self-driving cars are the visible results, but the core change is the growing importance and convergence of CPUs and GPUs. Software, however, has not kept pace: CPU software is mature, while GPU software remains underdeveloped and complex.

VectorWare believes this creates an enormous opportunity: a new software industry built on **GPU-native applications**. Today, even "GPU-accelerated" software relies on the CPU for control, limiting the GPU's true potential. VectorWare aims to build GPU-controlled software that unlocks significant performance gains across existing and new applications, from AI and simulation to workloads traditionally bound to the CPU.

The team, made up of experts in Rust, graphics, and large technology companies (Apple, Mozilla, Facebook), is building a low-level software stack and tooling to make GPU-native programming accessible. They have raised seed funding and are actively hiring engineers specializing in Rust, compilers, graphics, and Linux kernel development to realize this vision. They aim to be the "spreadsheet" of this new hardware era: creating the platforms and tools for broad GPU adoption.

## VectorWare: A New Approach to Rust GPU Programming

VectorWare, created by the `rust-GPU` and `rust-CUDA` teams, aims to build "GPU-native" software that moves control from the CPU to the GPU itself. This differs from current GPU applications, which rely mainly on the CPU to coordinate GPU work.

The team acknowledged past inaccuracies around project attribution and clarified their current role as maintainers of and investors in `rust-GPU` and `rust-CUDA`, while also exploring new technical directions. Discussion centered on the challenges of GPU programming, with some skeptical that fully GPU-controlled systems are feasible, citing hardware limitations and the inherent advantages of CPU-based coordination.

Alternatives such as WGPU, Ash, and Cudarc were highlighted as viable options for GPU acceleration in Rust today. VectorWare plans to share demos showcasing its vision, focused on exploiting the GPU's full potential and addressing the current immaturity of GPU software. They are also investigating limitations related to hardware capabilities and dynamic code loading.

Original article

Our thesis

Technology shifts happen gradually, then suddenly. We are in the suddenly part. New technologies like LLMs, generative AI, self-driving cars, drones, AR/VR, and robots are reshaping the world. But they are not the technology shift. They are the new applications enabled by it. The real shift is from CPU to GPU.

The importance of CPUs and GPUs has inverted. To compete, CPUs are adding GPU features while GPUs are adding CPU features. CPUs and GPUs are converging.

Software has not kept pace. CPU software is advanced, standardized, and familiar. GPU software is primitive, bespoke, and weird. Most programmers still focus on the CPU.

We believe we are at the start of a new software industry. We intend to lead it.


GPU-native software

There are two broad classes of applications:

  1. GPU applications such as AI, computer vision, machine learning, scientific simulations, and graphics. These require GPUs and are driving most demand, investment, and improvements in compute hardware today.
  2. CPU applications, which include nearly everything else.

If you look at existing GPU applications, their software implementations aren't truly GPU-native. Instead, they are architected as traditional CPU software with a GPU add-on. For example, PyTorch uses the CPU by default and GPU acceleration is opt-in. Even after opting in, the CPU is in control and orchestrates work on the GPU. Furthermore, if you look at the software kernels that run on the GPU, they are simplistic, with low cyclomatic complexity. This is not unique to PyTorch. Most software is CPU-only, a small subset is GPU-aware, an even smaller subset is GPU-only, and no software is GPU-native.
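
To make that orchestration pattern concrete, here is a minimal Rust sketch of how most GPU-accelerated code is structured today: the CPU owns the control flow, copies data to the device, launches a simple kernel, and copies results back. The `Gpu` type and its methods are hypothetical stand-ins, not a real API; they only illustrate the shape of the control flow described above.

```rust
// Hypothetical device handle used only to illustrate the control flow;
// real code would use a crate such as cudarc or wgpu instead.
struct Gpu;

impl Gpu {
    fn new() -> Self {
        Gpu
    }
    // Pretend upload/download/launch; real APIs are asynchronous and
    // considerably more involved.
    fn upload(&self, data: &[f32]) -> Vec<f32> {
        data.to_vec()
    }
    fn launch_scale_kernel(&self, buf: &mut [f32], factor: f32) {
        // Stand-in for a launched kernel: a trivial element-wise op,
        // mirroring the low-cyclomatic-complexity kernels described above.
        for x in buf.iter_mut() {
            *x *= factor;
        }
    }
    fn download(&self, buf: &[f32]) -> Vec<f32> {
        buf.to_vec()
    }
}

fn main() {
    // The CPU is in control the whole time: it decides what runs and when.
    let gpu = Gpu::new();
    let input = vec![1.0_f32, 2.0, 3.0];

    let mut device_buf = gpu.upload(&input);       // CPU -> GPU copy
    gpu.launch_scale_kernel(&mut device_buf, 2.0); // CPU launches the kernel
    let output = gpu.download(&device_buf);        // GPU -> CPU copy

    println!("{output:?}"); // [2.0, 4.0, 6.0]
}
```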

We are building software that is GPU-native. We intend to put the GPU in control. This does not happen today due to the difficulty of programming GPUs, the immaturity of GPU software and abstractions, and the relatively few developers targeting GPUs.
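
The announcement does not spell out what "the GPU in control" looks like in code. One common way to invert the relationship, shown purely as an illustration here, is a persistent, device-resident loop that pulls work from a queue and makes its own control-flow decisions, with the CPU reduced to feeding inputs. The Rust sketch below simulates that shape on the host with hypothetical types; it is not VectorWare's design.

```rust
use std::collections::VecDeque;

// Hypothetical work items; illustration only.
enum Task {
    Scale { data: Vec<f32>, factor: f32 },
    Shutdown,
}

// Stand-in for a long-lived, device-resident program. In a GPU-native
// design this loop would run on the GPU itself, not on the host.
fn device_main(queue: &mut VecDeque<Task>, results: &mut Vec<Vec<f32>>) {
    loop {
        match queue.pop_front() {
            Some(Task::Scale { mut data, factor }) => {
                // The "kernel" decides its own control flow instead of
                // being launched step by step from the CPU.
                for x in data.iter_mut() {
                    *x *= factor;
                }
                results.push(data);
            }
            Some(Task::Shutdown) | None => break,
        }
    }
}

fn main() {
    // The CPU's only job here is to enqueue work; the device-side loop
    // owns orchestration.
    let mut queue = VecDeque::from([
        Task::Scale { data: vec![1.0, 2.0], factor: 2.0 },
        Task::Scale { data: vec![3.0], factor: 10.0 },
        Task::Shutdown,
    ]);
    let mut results = Vec::new();
    device_main(&mut queue, &mut results);
    println!("{results:?}"); // [[2.0, 4.0], [30.0]]
}
```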

With the advent of GPU databases, we are just starting to see CPU-based applications migrate to GPUs. As CPUs and GPUs converge, we believe that all software will begin to leverage GPUs to varying degrees. This is a huge opportunity.

At VectorWare we are excited to focus on both improving GPU applications and migrating CPU applications to the GPU. We are building supporting tools and a new low-level software stack to make GPU-native software a reality.

Think of us like the software companies of the spreadsheet era. Spreadsheets were the killer app that made the new hardware ubiquitous, and a new industry grew up to create the platforms, apps, and developer tools for that hardware. Today, AI is the killer app making the new hardware ubiquitous, and VectorWare will create the platforms, apps, and developer tools for it.


Who we are

Our company is made up of Rust compiler team members, open source maintainers of rust-gpu, rust-cuda, and rustc_codegen_clr, and graphics experts from the gaming industry. In the past we've worked on everything from operating systems at Apple and browsers at Mozilla to web and mobile apps at Facebook and graphics technology at Embark Studios and Frozenbyte. We've led developer tools and infrastructure teams and even built our own IDE long before similar tools became billion-dollar AI companies. You can read more about us on our team page.

We had overwhelming interest from investors and a heavily oversubscribed seed round. Ultimately, we chose to raise a smaller amount from people we know well and have worked with at previous companies. We met Dan Portillo, co-founder of The General Partnership, while working at Mozilla and are thrilled to have him as our lead investor. Our angel investors include:

  • John Lilly, an experienced investor, operator, and leader. We worked with him at Mozilla where he was the CEO.
  • Patrick Kavanagh, one of the first angel investors in Robinhood and an early investor in hot AI startups such as Manus and Plaud. We worked with him at Robinhood where he was the head of international and crypto.
  • Nick Candito, a career entrepreneur who has seen three early-stage ventures scale to nearly $900M in acquisition value and has been part of over 300 private investments ($75M allocated, ~20 unicorns, 15+ exits, 10 funds). We met him when he was founding Progressly (later acquired by Box).

These folks are experienced investors as well as founders and operators who understand the challenges of building. We're grateful they chose to invest their time and money in us.


We're hiring

We are growing our early team and are hiring for a few key roles.

GPU-native application engineering

  • Goal: Ship GPU-native applications and build the missing abstractions that make them feel ordinary. Write "X for the GPU" where X is virtually any application.
  • Ideal background: Rust expertise plus experience with GPUs (CUDA, Vulkan, ROCm, CANN) and/or machine learning. Alternatively, the creator or maintainer of widely used Rust software with an interest in learning about GPUs.
  • Also welcome: GPU or ML experts who want to learn Rust.

Compiler engineering & language design

  • Goal: Shape the low-level stack and language features that keep GPU-native software safe, performant, ergonomic, and reusable.
  • Ideal background: Contributor to the Rust compiler, preferably including wasm, Cranelift, or LLVM. Or experience writing implementations of other languages or emulators in Rust.
  • Also welcome: Language or tooling experts (wasm, Triton, LLVM, MLIR, Mojo, shader compilers) ready to learn Rust.

Userland graphics engineering

  • Goal: Modify the graphics stack that GPU-native applications depend on to improve the safety, performance, ergonomics, and reusability of GPU-native applications. This includes APIs like Vulkan, plus stacks such as Mesa, DRM, Wayland, llvmpipe, MoltenVK, and KosmicKrisp.
  • Ideal background: Rust and graphics experience with a deep understanding of GPU APIs and architectures or compatibility layers.
  • Also welcome: Graphics engineers who want to learn Rust.

Linux kernel engineering

  • Goal: Push the OS to better support GPU-native applications, improving safety, performance, ergonomics, and reusability from the kernel up when running in the datacenter.
  • Ideal background: Linux kernel developers working on Rust-based graphics, storage, or networking drivers. Working directly on Rust for Linux would be great too.
  • Also welcome: Seasoned Linux kernel engineers who want to learn Rust and GPUs.

For more information and to get in touch, please visit our jobs page.
