Agent Lightning: Train agents with RL (no code changes needed)

Original link: https://github.com/microsoft/agent-lightning

## Agent Lightning: optimize AI agents without rewriting code

Agent Lightning is an easy-to-use tool that optimizes AI agents with techniques such as reinforcement learning and prompt optimization, **with no changes to your existing agent code.** It works with popular frameworks like LangChain and AutoGen, and even with plain Python OpenAI setups. The system works by tracing agent interactions (prompts, tool calls, rewards) and feeding that data to the algorithm of your choice. The algorithm then optimizes agent resources, such as prompts or policies, to continuously improve performance. Key features include selective optimization within multi-agent systems, a central "LightningStore" for data management, and a flexible Trainer that manages the learning loop. Agent Lightning prioritizes simplicity, letting developers focus on their core ideas rather than infrastructure. Examples such as DeepWerewolf and AgentFlow showcase its capabilities. It is open source (MIT license) and welcomes contributions; see their Discord and the paper for details: [https://arxiv.org/abs/2508.03680](https://arxiv.org/abs/2508.03680).

Microsoft has released the "Agent Lightning" framework, which aims to simplify reinforcement learning (RL) training for AI agents without requiring any modifications to the agent's own code. It essentially plugs into existing algorithms, aiming to streamline the training process rather than replace core RL methods. Initial reactions from the Hacker News community were mixed. Some see it as promising for optimizing LLM agents and a possible alternative to tools like DSPy, while others remain skeptical. Concerns raised include a lack of documentation, potential instability from its dependency on the frequently updated "verl" library, and doubts about its effectiveness on harder problems such as sparse-reward tasks. Some commenters jokingly speculated that much of the project's documentation was LLM-generated, noting its unusual style. Overall, opinions ranged from cautious optimism to dismissal, with some arguing the project prioritizes elaborate presentation over clear functionality.

Original


The absolute trainer to light up AI agents.

Join our Discord community to connect with other users and contributors.

  • Turn your agent into an optimizable beast with ZERO CODE CHANGE (almost)! 💤
  • Build with ANY agent framework (LangChain, OpenAI Agent SDK, AutoGen, CrewAI, Microsoft Agent Framework...); or even WITHOUT agent framework (Python OpenAI). You name it! 🤖
  • Selectively optimize one or more agents in a multi-agent system. 🎯
  • Embraces algorithms like Reinforcement Learning, Automatic Prompt Optimization, Supervised Fine-tuning and more. 🤗

Read more on our documentation website.

Agent-Lightning Core Quickstart

pip install agentlightning

Please refer to our installation guide for more details.

To start using Agent-lightning, check out our documentation and examples.

  • DeepWerewolf — A case study of agent RL training for the Chinese Werewolf game built with AgentScope and Agent Lightning.
  • AgentFlow — A modular multi-agent framework that combines planner, executor, verifier, and generator agents with the Flow-GRPO algorithm to tackle long-horizon, sparse-reward tasks.

Agent Lightning keeps the moving parts to a minimum so you can focus on your idea, not the plumbing. Your agent continues to run as usual; you can still use any agent framework you like; you drop in the lightweight agl.emit_xxx() helper, or let the tracer collect every prompt, tool call, and reward. Those events become structured spans that flow into the LightningStore, a central hub that keeps tasks, resources, and traces in sync.
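A minimal sketch of how traced events might become structured spans in a central store. All class and method names below are illustrative stand-ins, not the actual agentlightning API:

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class Span:
    """One traced event from an agent rollout (illustrative shape)."""
    kind: str                 # e.g. "prompt", "tool_call", "reward"
    payload: dict[str, Any]
    rollout_id: str

@dataclass
class LightningStoreSketch:
    """Toy stand-in for the central hub that keeps traces in sync."""
    spans: list[Span] = field(default_factory=list)
    resources: dict[str, Any] = field(default_factory=dict)

    def emit(self, kind: str, payload: dict[str, Any], rollout_id: str) -> None:
        self.spans.append(Span(kind, payload, rollout_id))

    def spans_for(self, rollout_id: str) -> list[Span]:
        return [s for s in self.spans if s.rollout_id == rollout_id]

# The agent keeps running as usual; the tracer (or explicit emit calls)
# records each prompt, tool call, and final reward as a span.
store = LightningStoreSketch()
store.emit("prompt", {"text": "What is 2+2?"}, rollout_id="r1")
store.emit("tool_call", {"tool": "calculator", "args": "2+2"}, rollout_id="r1")
store.emit("reward", {"value": 1.0}, rollout_id="r1")

print(len(store.spans_for("r1")))  # → 3
```

The point of the structured-span shape is that any downstream algorithm can filter by kind (e.g. only reward spans) without knowing which framework produced the events.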

On the other side of the store sits the algorithm you choose, or write yourself. The algorithm reads spans, learns from them, and posts updated resources such as refined prompt templates or new policy weights. The Trainer ties it all together: it streams datasets to runners, ferries resources between the store and the algorithm, and updates the inference engine when improvements land. You can either stop there, or simply let the same loop keep turning.
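The algorithm side of that loop can be sketched as a toy automatic-prompt-optimization pass: read reward-bearing rollouts, score candidate resources, and post back the best one. Function and variable names here are hypothetical, not the real Trainer or algorithm API:

```python
from statistics import mean

def run_rollout(prompt_template: str, task: str) -> float:
    """Stand-in for a full agent rollout; returns a scalar reward.

    In this toy world, prompts that ask for step-by-step reasoning
    happen to score better.
    """
    return 1.0 if "step by step" in prompt_template else 0.2

def optimize_prompt(candidates: list[str], tasks: list[str]) -> str:
    """Score each candidate template by mean reward and return the best."""
    scored = []
    for template in candidates:
        rewards = [run_rollout(template, task) for task in tasks]
        scored.append((mean(rewards), template))
    # The winning template is what the algorithm would post back to the
    # store as an updated resource for the next round of rollouts.
    return max(scored)[1]

candidates = [
    "Answer: {task}",
    "Think step by step, then answer: {task}",
]
best = optimize_prompt(candidates, tasks=["2+2", "3*3"])
print(best)  # → Think step by step, then answer: {task}
```

A real algorithm (RL, supervised fine-tuning) would update policy weights instead of picking a template, but the store-mediated read-spans/post-resource cycle is the same.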

No rewrites, no lock-in, just a clear path from first rollout to steady improvement.

Agent-lightning Architecture

If you find Agent Lightning useful in your research or projects, please cite our paper:

@misc{luo2025agentlightningtrainai,
      title={Agent Lightning: Train ANY AI Agents with Reinforcement Learning},
      author={Xufang Luo and Yuge Zhang and Zhiyuan He and Zilong Wang and Siyun Zhao and Dongsheng Li and Luna K. Qiu and Yuqing Yang},
      year={2025},
      eprint={2508.03680},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2508.03680},
}

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party's policies.

This project has been evaluated and certified to comply with the Microsoft Responsible AI Standard. The team will continue to monitor and maintain the repository, addressing any severe issues, including potential harms, if they arise.

This project is licensed under the MIT License. See the LICENSE file for details.
