DiffuCoder: Understanding and Improving Masked Diffusion Models for Code Generation

Original link: https://arxiv.org/abs/2506.20639

DiffuCoder is a 7B diffusion large language model (dLLM) for code generation, trained on 130B code tokens. The work systematically investigates the denoising processes and reinforcement learning (RL) methods of dLLMs for coding, aiming to understand and improve their performance. It shows that dLLMs differ from autoregressive (AR) models in two ways: they can decide how causal their generation should be without relying on semi-AR decoding, and raising the sampling temperature diversifies not only token choices but also the order in which tokens are generated. To strengthen RL training, the authors propose coupled-GRPO, a novel sampling scheme designed to reduce the variance of token log-likelihood estimates while maintaining training efficiency. Experiments show that coupled-GRPO significantly improves DiffuCoder's performance on code generation benchmarks (+4.4% on EvalPlus) and reduces reliance on AR bias during decoding. This work provides valuable insight into the generation mechanics of dLLMs and introduces an effective, diffusion-native RL training framework.
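The coupled-sampling idea behind coupled-GRPO can be sketched as: draw one random mask over the completion and pair it with its complement, so that across the two noised copies every token position is masked exactly once and contributes to the log-likelihood estimate. This is an illustrative sketch only; the function name, mask ratio, and shapes below are assumptions, not the paper's implementation.

```python
import torch

def coupled_masks(seq_len, mask_ratio=0.5, generator=None):
    """Illustrative coupled mask sampling (hypothetical helper, not the
    paper's code): return a random boolean mask and its complement so that
    every position is noised in exactly one of the two forward passes."""
    scores = torch.rand(seq_len, generator=generator)
    k = int(seq_len * mask_ratio)
    # mask the k positions with the lowest random scores in the first copy
    masked = torch.zeros(seq_len, dtype=torch.bool)
    masked[scores.argsort()[:k]] = True
    # the second copy masks exactly the remaining positions
    return masked, ~masked
```

Because the two masks partition the sequence, averaging the per-token losses from both passes covers every token once, which is the intuition for the reduced variance of the likelihood estimate.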


Original text

View a PDF of the paper titled DiffuCoder: Understanding and Improving Masked Diffusion Models for Code Generation, by Shansan Gong and Ruixiang Zhang and Huangjie Zheng and Jiatao Gu and Navdeep Jaitly and Lingpeng Kong and Yizhe Zhang

Abstract: Diffusion large language models (dLLMs) are compelling alternatives to autoregressive (AR) models because their denoising models operate over the entire sequence. The global planning and iterative refinement features of dLLMs are particularly useful for code generation. However, current training and inference mechanisms for dLLMs in coding are still under-explored. To demystify the decoding behavior of dLLMs and unlock their potential for coding, we systematically investigate their denoising processes and reinforcement learning (RL) methods. We train a 7B dLLM, DiffuCoder, on 130B tokens of code. Using this model as a testbed, we analyze its decoding behavior, revealing how it differs from that of AR models: (1) dLLMs can decide how causal their generation should be without relying on semi-AR decoding, and (2) increasing the sampling temperature diversifies not only token choices but also their generation order. This diversity creates a rich search space for RL rollouts. For RL training, to reduce the variance of token log-likelihood estimates and maintain training efficiency, we propose coupled-GRPO, a novel sampling scheme that constructs complementary mask noise for completions used in training. In our experiments, coupled-GRPO significantly improves DiffuCoder's performance on code generation benchmarks (+4.4% on EvalPlus) and reduces reliance on AR bias during decoding. Our work provides deeper insight into the machinery of dLLM generation and offers an effective, diffusion-native RL training framework.
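Point (2) of the decoding analysis — that temperature diversifies generation order as well as token choice — can be seen in confidence-based parallel decoding, where the most confident masked positions are committed first and higher temperature flattens those confidences. A minimal, hypothetical sketch of one such unmasking step (the function name and top-k commit rule are assumptions, not the paper's decoder):

```python
import torch

def demask_step(logits, still_masked, temperature=1.0, n_unmask=1):
    """Illustrative confidence-based unmasking step for a masked diffusion
    decoder: sample a candidate token at every masked position, then commit
    only the n_unmask most confident ones. Raising the temperature flattens
    the confidence scores, so both the chosen tokens and the order in which
    positions get committed become more varied."""
    probs = torch.softmax(logits / temperature, dim=-1)       # (L, V)
    tokens = torch.multinomial(probs, 1).squeeze(-1)          # one sample per position
    conf = probs.gather(-1, tokens.unsqueeze(-1)).squeeze(-1)  # confidence of each sample
    conf = conf.masked_fill(~still_masked, float("-inf"))     # only masked slots compete
    commit = conf.topk(n_unmask).indices                      # positions revealed this step
    return tokens, commit
```

At temperature near zero the argmax dominates, so the commit order is nearly deterministic; at higher temperatures both sampled tokens and commit order vary, which is what makes the RL rollout search space rich.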
From: Shansan Gong
[v1] Wed, 25 Jun 2025 17:35:47 UTC (2,004 KB)
[v2] Thu, 26 Jun 2025 15:46:40 UTC (2,005 KB)