Multimodal Diffusion Language Models for Thinking-Aware Editing and Generation

Original link: https://github.com/tyfeld/MMaDA-Parallel

## MMaDA-Parallel: Robust Thinking-Aware Image Generation

Existing "thinking-aware" text-to-image models often perform *worse* on complex tasks because errors propagate through the sequential generation process. To analyze this, the researchers introduce **ParaBench**, a benchmark that reveals poor alignment between the generated reasoning text and the final image as the key problem. Their solution, **MMaDA-Parallel**, is a novel parallel multimodal diffusion framework: unlike sequential approaches, it enables continuous, bidirectional interaction between text and image throughout image creation. This is further strengthened by **Parallel Reinforcement Learning (ParaRL)**, which uses semantic rewards to enforce consistency across modalities. Experiments on ParaBench show a **6.9% improvement in Output Alignment** over the state-of-the-art Bagel model. MMaDA-Parallel currently performs well on synthetic datasets (environments, still life, and so on) and is being extended to more diverse, real-world imagery. Code and pretrained models (MMaDA-Parallel-A & MMaDA-Parallel-M) are publicly available.

## MMaDA-Parallel: A New Approach to Multimodal Diffusion

A new framework, **MMaDA-Parallel**, is proposed for more interactive and consistent text-and-image generation. It uses a parallel multimodal diffusion process that allows continuous, bidirectional interaction between modalities throughout generation, essentially letting the text "attend" to the image and vice versa. A key component is **ParaRL**, a strategy that uses semantic rewards to maintain cross-modal consistency.

Much of the discussion centers on potential applications beyond images, especially coding. Some argue that this parallel approach could model the iterative, edit-heavy nature of human coding better than current autoregressive models (such as GPT-5), potentially improving code generation with feedback loops (linters, tests). Others point out that existing code diffusion models have not yet outperformed conventional LLMs, and question whether parallelism within a single modality (e.g., "thinking" while "coding") is genuinely novel, since current models already exhibit internal interaction. There are also concerns about gaps between a model's expressed reasoning and its actual internal process, and about whether the user experience suits text generation.

Parallel Generation Demo

Demo: Parallel text-image generation in action.

While thinking-aware generation aims to improve performance on complex tasks, we identify a critical failure mode where existing sequential, autoregressive approaches can paradoxically degrade performance due to error propagation. To systematically analyze this issue, we propose ParaBench, a new benchmark designed to evaluate both text and image output modalities. Our analysis using ParaBench reveals that this performance degradation is strongly correlated with poor alignment between the generated reasoning and the final image. To resolve this, we propose a parallel multimodal diffusion framework that enables continuous, bidirectional interaction between text and images throughout the entire denoising trajectory. This model, MMaDA-Parallel, is trained with supervised finetuning and then further optimized by Parallel Reinforcement Learning (ParaRL), a novel strategy that applies semantic rewards along the trajectory to enforce cross-modal consistency. Experiments validate that our approach significantly improves cross-modal alignment and semantic consistency, achieving a 6.9% improvement in Output Alignment on ParaBench compared to the state-of-the-art model, Bagel, establishing a more robust paradigm for thinking-aware image synthesis.
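As described above, ParaRL assigns semantic rewards along the denoising trajectory rather than only at the final output. A minimal, illustrative sketch of that idea follows; it is not the paper's implementation, and `similarity` (e.g., a CLIP-style text-image scorer), the snapshot sampling, and the weighting scheme are all assumptions.

```python
import torch

# Illustrative sketch of a trajectory-level semantic reward (assumed design,
# not the repository's ParaRL code). `similarity(text, image)` should return a
# scalar tensor scoring cross-modal consistency, e.g. a CLIP-style score.
def trajectory_reward(similarity, text_snapshots, image_snapshots, gamma=0.9):
    """Weighted average of cross-modal alignment over sampled denoising steps.

    text_snapshots / image_snapshots: decoded text and image states captured at
    a few intermediate steps; later steps receive higher weight when gamma < 1.
    """
    rewards = []
    n = len(text_snapshots)
    for t, (txt, img) in enumerate(zip(text_snapshots, image_snapshots)):
        weight = gamma ** (n - 1 - t)          # the final snapshot gets weight 1
        rewards.append(weight * similarity(txt, img))
    return torch.stack(rewards).mean()
```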

Architecture of MMaDA-Parallel. During Training, image and text responses are masked and predicted in parallel with a uniform mask predictor. During Sampling, the model performs parallel decoding to generate both image and text responses jointly, enabling continuous cross-modal interaction.
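A minimal sketch of the parallel decoding loop implied by this description may help; it is a simplification under assumptions (a single `predictor` scoring concatenated text and image tokens, a `mask_id` placeholder, and a confidence-based unmasking schedule), not the repository's sampler.

```python
import torch

# Sketch of joint parallel decoding: text and image tokens live in one masked
# sequence and are revealed together, so each modality conditions the other at
# every step. All names and the unmasking schedule here are assumptions.
def parallel_decode(predictor, text_len=256, image_len=1024,
                    mask_id=0, steps=64, device="cpu"):
    total = text_len + image_len
    seq = torch.full((1, total), mask_id, dtype=torch.long, device=device)
    per_step = max(1, total // steps)
    for _ in range(steps):
        logits = predictor(seq)                        # (1, total, vocab)
        conf, pred = logits.softmax(-1).max(-1)        # per-position confidence
        conf = conf.masked_fill(seq != mask_id, -1.0)  # only still-masked slots
        idx = conf.topk(per_step, dim=-1).indices      # most confident positions
        seq.scatter_(1, idx, pred.gather(1, idx))      # reveal them in-place
    return seq[:, :text_len], seq[:, text_len:]        # text tokens, image tokens
```

A toy predictor such as `lambda s: torch.randn(s.shape[0], s.shape[1], 16384)` is enough to exercise the loop.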

Main Results

Qualitative comparison.

Quantitative Results on ParaBench.

Note: Our model has been successfully validated on synthetic datasets focusing on environments, still life, architecture, and natural landscapes. Its performance on out-of-distribution inputs—such as human faces or real-world photographic imagery—has not yet been fully explored. We are actively expanding our training corpus to include more diverse datasets.

First, set up a PyTorch environment with torch 2.3.1 or later, then install the following dependencies:

pip install -r requirements.txt
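As a quick sanity check before installing the remaining dependencies, you can confirm that the installed PyTorch meets the stated 2.3.1 requirement (this snippet assumes the common `packaging` helper is present):

```python
import torch
from packaging import version

# Check the torch >= 2.3.1 requirement and report whether CUDA is usable.
installed = version.parse(torch.__version__.split("+")[0])
assert installed >= version.parse("2.3.1"), f"torch {torch.__version__} is too old; 2.3.1+ required"
print(f"torch {torch.__version__}, CUDA available: {torch.cuda.is_available()}")
```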

We provide two variants of MMaDA-Parallel with different tokenizers: MMaDA-Parallel-A is trained with the Amused-VQ tokenizer, and MMaDA-Parallel-M is trained with the Magvitv2 tokenizer.
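If you prefer to fetch the checkpoints ahead of time, `huggingface_hub` can mirror them locally. This is only a sketch: the `tyfeld/MMaDA-Parallel-A` repo ID comes from the inference command below, while the `tyfeld/MMaDA-Parallel-M` ID is an assumption you may need to adjust.

```python
from huggingface_hub import snapshot_download

# Download both variants into the local Hugging Face cache.
path_a = snapshot_download("tyfeld/MMaDA-Parallel-A")  # Amused-VQ tokenizer
path_m = snapshot_download("tyfeld/MMaDA-Parallel-M")  # Magvitv2 tokenizer (assumed repo ID)
print(path_a)
print(path_m)
```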

2. Experiencing Parallel Gen with MMaDA-Parallel-A

You can directly use the local Gradio app to experience parallel generation with MMaDA-Parallel-A:

Or you can run the inference script to generate parallel results:

cd MMaDA-Parallel-A
python inference.py \
    --checkpoint tyfeld/MMaDA-Parallel-A \
    --vae_ckpt tyfeld/MMaDA-Parallel-A \
    --prompt "Replace the laptops with futuristic transparent tablets displaying holographic screens, and change the drink to a cup of glowing blue energy drink." \
    --image_path examples/image.png \
    --height 512 \
    --width 512 \
    --timesteps 64 \
    --text_steps 128 \
    --text_gen_length 256 \
    --text_block_length 32 \
    --cfg_scale 0 \
    --cfg_img 4.0 \
    --temperature 1.0 \
    --text_temperature 0 \
    --seed 42 \
    --output_dir output/results_interleave
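To run several edit prompts against the same source image, a thin wrapper around the command above can be convenient. This is just a usage sketch that reuses the exact flags from the example and assumes it is launched from inside the MMaDA-Parallel-A directory:

```python
import subprocess

# Convenience wrapper around the inference command above; flags mirror the
# example invocation and `inference.py` is assumed to be in the working directory.
def run_edit(prompt, image_path, out_dir="output/results_interleave"):
    cmd = [
        "python", "inference.py",
        "--checkpoint", "tyfeld/MMaDA-Parallel-A",
        "--vae_ckpt", "tyfeld/MMaDA-Parallel-A",
        "--prompt", prompt,
        "--image_path", image_path,
        "--height", "512", "--width", "512",
        "--timesteps", "64",
        "--text_steps", "128",
        "--text_gen_length", "256",
        "--text_block_length", "32",
        "--cfg_scale", "0", "--cfg_img", "4.0",
        "--temperature", "1.0", "--text_temperature", "0",
        "--seed", "42",
        "--output_dir", out_dir,
    ]
    subprocess.run(cmd, check=True)

for p in ["Make the scene nighttime with neon lighting.",
          "Turn the coffee cup into a small potted cactus."]:
    run_edit(p, "examples/image.png")
```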

3. Parallel Gen with MMaDA-Parallel-M

cd MMaDA-Parallel-M
python inference.py interleave_root=./interleave_validation

Citation

@article{tian2025mmadaparallel,
  title={MMaDA-Parallel: Multimodal Large Diffusion Language Models for Thinking-Aware Editing and Generation},
  author={Tian, Ye and Yang, Ling and Yang, Jiongfan and Wang, Anran and Tian, Yu and Zheng, Jiani and Wang, Haochen and Teng, Zhiyang and Wang, Zhuochen and Wang, Yinjie and Tong, Yunhai and Wang, Mengdi and Li, Xiangtai},
  journal={arXiv preprint arXiv:2511.09611},
  year={2025}
}

This work is heavily based on MMaDA and Lumina-DiMOO. Thanks to all the authors for their great work.
