Nvidia is gearing up to sell servers instead of just GPUs and components

原始链接: https://www.tomshardware.com/tech-industry/artificial-intelligence/jp-morgan-says-nvidia-is-gearing-up-to-sell-entire-ai-servers-instead-of-just-ai-gpus-and-componentry-jensens-master-plan-of-vertical-integration-will-boost-profits-purportedly-starting-with-vera-rubin

Nvidia is reportedly planning a major shakeup of its AI hardware supply chain with the upcoming Vera Rubin platform. Instead of selling components to partners for server assembly, Nvidia plans to ship them fully assembled Level-10 (L10) compute trays, complete with GPUs, CPUs, cooling, and interfaces. According to J.P. Morgan, the move simplifies things for original design manufacturers (ODMs) but could shrink their margins, since Nvidia would take on more of the production. Partners would mainly handle rack-level integration, such as chassis manufacturing, power installation, and final testing, rather than complex server design. The shift is driven by the growing complexity and power requirements of Rubin GPUs (potentially up to 2.3 kW) and aims to accelerate VR200 deployment through economies of scale with EMS suppliers such as Foxconn. Ultimately, Nvidia would control the server's core "compute engine," recasting partners as system integrators and support providers. The long-term implications for Nvidia's Kyber rack-scale solutions remain to be seen.
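As a rough illustration of why these trays ship with cooling pre-installed, here is a back-of-the-envelope power estimate. Only the ~2.3 kW per-GPU figure comes from the report; the GPUs-per-tray, trays-per-rack, and overhead numbers below are illustrative assumptions, not confirmed VR200 or Kyber specifications.

```python
# Back-of-the-envelope rack power estimate for a Rubin-class deployment.
# The ~2.3 kW per-GPU figure is the number cited in the report; the
# GPUs-per-tray and trays-per-rack values are assumptions for illustration.

GPU_POWER_KW = 2.3          # reported upper-bound power per Rubin GPU
GPUS_PER_TRAY = 4           # assumed GPUs per L10 compute tray
TRAYS_PER_RACK = 18         # assumed compute trays per rack
CPU_AND_OVERHEAD_KW = 1.5   # assumed per-tray budget for CPU, NICs, pumps

tray_power_kw = GPUS_PER_TRAY * GPU_POWER_KW + CPU_AND_OVERHEAD_KW
rack_power_kw = tray_power_kw * TRAYS_PER_RACK
gpu_only_rack_kw = GPUS_PER_TRAY * GPU_POWER_KW * TRAYS_PER_RACK

print(f"Per-tray power: ~{tray_power_kw:.1f} kW")
print(f"Per-rack power: ~{rack_power_kw:.1f} kW")
# Under these assumptions, GPU draw alone approaches ~165 kW per rack,
# which is why the pre-built trays include liquid-cooling cold plates
# rather than leaving thermals to the integrator.
```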

## Nvidia Expands Into Server Sales: Summary

Reports indicate that Nvidia is shifting toward selling fully assembled server systems, including CPUs, GPUs, cooling, and networking, rather than just components. This marks a significant step toward vertical integration, going beyond partially integrated solutions like the GB200 platform. The move aims to capture more of the value chain and could "sandbox" customers within Nvidia's ecosystem. The discussion suggests this fits Nvidia's long-term vision, dating back to 2019/2020, of offering complete computing solutions and perhaps even "compute as a service." Some draw comparisons to Apple or IBM's mainframe model. There are concerns about customer lock-in and the risk of Nvidia competing directly with its own partners. Others note the potential for higher profitability given Nvidia's high valuation and current supply constraints. The shift could also affect cloud providers, potentially turning them into colocation-style data centers. Ultimately, the move is seen as a strategic effort to control more of the AI infrastructure stack.

Original Text

The launch of Nvidia's Vera Rubin platform for AI and HPC next year could mark significant changes in the AI hardware supply chain as Nvidia plans to ship its partners fully assembled Level-10 (L10) VR200 compute trays with all compute hardware, cooling systems, and interfaces pre-installed, according to J.P. Morgan (via @Jukanlosreve). The move would leave major ODMs with very little design or integration work, making their lives easier, but would also trim their margins in favor of Nvidia's. The information remains unofficial at this stage.

Starting with the VR200 platform, Nvidia is reportedly preparing to take over production of fully built L10 compute trays with a pre-installed Vera CPU, Rubin GPUs, and a cooling system instead of allowing hyperscalers and ODM partners to build their own motherboards and cooling solutions. This would not be the first time the company has supplied its partners with a partially integrated server sub-assembly: it did so with its GB200 platform when it supplied the whole Bianca board with key components pre-installed. However, at the time, this could be considered L7-L8 integration, whereas now the company is reportedly considering going all the way to L10, selling the whole tray assembly — including accelerators, CPU, memory, NICs, power-delivery hardware, midplane interfaces, and liquid-cooling cold plates — as a pre-built, tested module.
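To make the jump from L7-L8 to L10 integration concrete, the minimal sketch below models what would ship pre-installed at each level versus what would be left to the partner. The component lists mirror the article's description; the class, field names, and the exact split of partner responsibilities are illustrative assumptions, not an Nvidia or ODM specification.

```python
from dataclasses import dataclass, field

@dataclass
class ComputeTray:
    """Illustrative model of an integration level: what arrives pre-built
    versus what the ODM/hyperscaler still has to do. Names are hypothetical."""
    level: str
    preinstalled: list[str] = field(default_factory=list)
    left_to_partner: list[str] = field(default_factory=list)

# GB200-era delivery: a populated Bianca board, roughly L7-L8 integration.
gb200_board = ComputeTray(
    level="L7-L8",
    preinstalled=["Grace CPU", "Blackwell GPUs", "board-level power delivery"],
    left_to_partner=["tray design", "liquid-cooling loop", "NICs",
                     "midplane interfaces", "tray-level test"],
)

# Reported VR200-era delivery: a fully built and tested L10 tray.
vr200_tray = ComputeTray(
    level="L10",
    preinstalled=["Vera CPU", "Rubin GPUs", "memory", "NICs",
                  "power-delivery hardware", "midplane interfaces",
                  "liquid-cooling cold plates", "tray-level test"],
    left_to_partner=["rack enclosure", "power shelves", "final rack-level test"],
)

for tray in (gb200_board, vr200_tray):
    print(f"{tray.level}: partner still handles {', '.join(tray.left_to_partner)}")
```

Under this framing, the partner's remaining work at L10 is essentially rack-level integration and validation, which is consistent with the margin shift toward Nvidia that J.P. Morgan describes.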
