(comments)

Original link: https://news.ycombinator.com/item?id=43426164

Inngest has launched AgentKit, a TypeScript-based multi-agent library positioned as an alternative to the OpenAI Agents SDK. AgentKit emphasizes deterministic routing, support for multiple model providers, and seamless production deployment, and is designed for TypeScript developers seeking a reliable agent framework.

AgentKit's core components are: Agents (LLM calls with tools), Networks (agent collaboration with shared state), State (conversation history plus a state machine used for routing), and Routers (code-based or LLM-based orchestration). The framework runs a loop in which the network inspects state, uses the router to determine the next agent to call, then executes that agent to update the state.

AgentKit aims to be easy to test and debug via Inngest's local DevServer, which provides tracing, replay, and debugging tools. It also integrates with Inngest for fault-tolerant execution and scalability in production. It is open source under the Apache 2 license.


Original text
Show HN: AgentKit – JavaScript Alternative to OpenAI Agents SDK with Native MCP (github.com/inngest)
43 points by tonyhb 2 hours ago | 10 comments
Hi HN! I’m Tony, co-founder of Inngest. I wanted to share AgentKit, our TypeScript multi-agent library that we’ve been cooking and testing with some early users in prod for months.

Although OpenAI’s Agents SDK has since launched, we think an agent framework should offer more deterministic and flexible routing, work with multiple model providers, embrace MCP (for rich tooling), and support the unstoppable, growing community of TypeScript AI developers by enabling a smooth transition to production use cases.

This is why we are building AgentKit, and we’re really excited about it for a few reasons:

Firstly, it’s simple. We embrace KISS principles brought by Anthropic and HuggingFace by allowing you to gradually add autonomy to your AgentKit program using primitives:

- Agents: LLM calls that can be combined with prompts, tools, and MCP native support.

- Networks: a simple way to get Agents to collaborate with a shared State, including handoff.

- State: combines conversation history with a fully typed state machine, used in routing.

- Routers: where the autonomy lives, from code-based to LLM-based (e.g. ReAct) orchestration.

The routers are where the magic happens, and allow you to build deterministic, reliable, testable agents.

AgentKit routing works as follows: the network calls itself in a loop, inspecting the State to determine which agents to call next using a router. The returned agent runs, then optionally updates state data using its tools. On the next loop, the network inspects state data and conversation history, and determines which new agent to run.
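The loop described above can be sketched without the library itself. This is a dependency-free illustration of the pattern, not the actual AgentKit API; all names (`NetworkState`, `runNetwork`, the toy agents) are hypothetical:

```typescript
// Dependency-free sketch of the network loop: inspect state, route, run agent,
// repeat. See the AgentKit docs for the real API; these names are illustrative.

interface NetworkState {
  history: string[];             // conversation history
  data: Record<string, unknown>; // typed state written by agents' tools
}

type Agent = (state: NetworkState) => NetworkState;

// A router is just a function: inspect state, return the next agent to run,
// or undefined when all goals are met.
type Router = (state: NetworkState) => Agent | undefined;

function runNetwork(state: NetworkState, router: Router, maxIterations = 10): NetworkState {
  for (let i = 0; i < maxIterations; i++) {
    const agent = router(state); // network inspects state via the router
    if (!agent) break;           // no agent returned: we're done
    state = agent(state);        // returned agent runs and updates state
  }
  return state;
}

// Toy agents: a planner writes a plan into state, then an executor uses it.
const planner: Agent = (s) => ({
  history: [...s.history, "planner: made a plan"],
  data: { ...s.data, plan: "do the thing" },
});
const executor: Agent = (s) => ({
  history: [...s.history, "executor: did the thing"],
  data: { ...s.data, done: true },
});

// Deterministic, code-based routing: plain conditionals over state data.
const router: Router = (s) => {
  if (!s.data.plan) return planner;
  if (!s.data.done) return executor;
  return undefined;
};

const final = runNetwork({ history: [], data: {} }, router);
console.log(final.data.done); // true
```

Because the router is ordinary code over typed state, each iteration's decision is reproducible and unit-testable in isolation.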

This fully typed state machine routing allows you to deterministically build agents using any of the effective agent patterns — which means your code is easy to read, edit, understand, and debug.

This also makes handoff incredibly easy: you define when agents should hand off to each other using regular code and state (or by calling an LLM in the router for AI-based routing). This is similar to the OpenAI Agents SDK but easier to manage, plan, and build.

Then comes the local development and moving to production capabilities.

AgentKit is compatible with Inngest’s tooling, meaning you can test agents using Inngest’s local DevServer, which provides traces, inputs and outputs, replay, tool and MCP call inspection, and (soon) a step-over debugger so that you can easily and visually see what's happening in the agent loop.

In production, you can also optionally combine AgentKit with Inngest for fault-tolerant execution. Each agent’s LLM call is wrapped in a step, and tools can use multiple steps to incorporate things like human-in-the-loop. This gives you native orchestration, observability, and out-of-the-box scale.
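The "each LLM call wrapped in a step" idea can be illustrated generically. The sketch below shows durable-step memoization under simplified assumptions (results cached by step id, so a retried run replays completed steps instead of re-executing them); it is not Inngest's actual implementation, and every name here is hypothetical:

```typescript
// Simplified sketch of durable "steps": each step's result is memoized by id,
// so if the run is retried after a failure, completed steps are not re-run.
// Illustrative only -- Inngest's real step engine persists results remotely.

type StepCache = Map<string, unknown>;

async function runStep<T>(cache: StepCache, id: string, fn: () => Promise<T>): Promise<T> {
  if (cache.has(id)) return cache.get(id) as T; // replayed: skip the work
  const result = await fn();                    // first run: do the work
  cache.set(id, result);                        // record result before moving on
  return result;
}

async function agentRun(cache: StepCache, prompts: string[]): Promise<string[]> {
  const executed: string[] = []; // which prompts actually ran this invocation
  for (const prompt of prompts) {
    // Each "LLM call" is one step; on retry, earlier steps replay from cache.
    await runStep(cache, `llm:${prompt}`, async () => {
      executed.push(prompt);        // side effect happens only on first execution
      return `answer to ${prompt}`; // stand-in for a real model response
    });
  }
  return executed;
}
```

Running `agentRun` twice with the same cache executes the steps only the first time; the second invocation replays them, which is what makes crash-and-retry safe for long agent loops.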

In the documentation you will find an AgentKit SWE-bench example and multiple Coding Agent examples.

It’s fully open-source under the Apache 2 license.

If you want to get started:

- npm: npm i @inngest/agent-kit

- GitHub: https://github.com/inngest/agent-kit

- Docs: https://agentkit.inngest.com/overview

We’re excited to finally launch AgentKit; let us know what you think!











How's the performance compared to LangGraph? I'm working on a project that needs to handle high throughput agent interactions.


Akka focuses on enterprise agentic systems, with an emphasis on creating certainty and solving scale problems. We have a customer, Swiggy, running >3M inferences per second across a blended set of models, both ML and LLMs, with a p99 latency of roughly 70ms.

This level of throughput is achieved by embedding a memory database within the agentic process; the clustering system then automatically shards and balances memory data across nodes, with end-user routing built in. Combined with non-blocking ML invocations with back pressure, you get the balance needed for performance.



The framework itself is super low overhead. You can deploy this anywhere, and if you deploy to inngest.com the P99 latency of starting agents is sub-50ms (and you can also realtime stream steps, tools, or model responses to the browser).

One of the main differences is the DX — _how_ you define the agentic workflows is far cleaner, so it's both faster to build and fast in production.



This looks really good. I'm going to take a detailed look at this for sure. Thanks!


Interesting, have you considered adding benchmarks comparing AgentKit to other frameworks? Would help teams evaluating options


Really love the decoupling of the logic and the runtime for the actual tool calls.


good stuff


Yet another agent framework in an increasingly crowded space. What makes this truly different from LangChain, AutoGPT, or LlamaIndex beyond TypeScript support? Starting to feel like the agent ecosystem is becoming as fragmented as the JavaScript framework landscape, haha.


The main one is deterministic routing (https://agentkit.inngest.com/advanced-patterns/routing). Here's what that means:

Each agent builds up state via tool use. On each loop of the network, you inspect this state to figure out which agent to run next. You don't build DAGs or create odd graphs — you write regular code in a router.
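That "regular code in a router" point can be as small as a pure function over state. The shapes below are hypothetical (the real AgentKit router signature differs); the point is that routing is plain conditionals, not a graph DSL:

```typescript
// Hypothetical state shape for illustration; not the actual AgentKit types.
interface State {
  data: { classified?: boolean; resolved?: boolean };
}

// Inspect state built up by previous agents' tool calls and return the next
// agent's name, or undefined when every goal is met. No DAGs, no graphs.
function route(state: State): string | undefined {
  if (!state.data.classified) return "classifier";
  if (!state.data.resolved) return "resolver";
  return undefined; // all goals met: stop the network loop
}
```

Because it is a pure function, you can unit-test every routing decision without ever calling a model.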

Or, more generally:

* Each agent has a specific goal within a larger network. Several agents each working on smaller goals means easier prompt generation, testing, iteration, and a higher success rate.

* The network combines agents to achieve an overall objective, with shared state modified by each agent

* The network’s router inspects state and determines which agent should run next

* The network runs in a loop, calling the router on each iteration until all goals are met

* Agents run with updated conversation history and state on each loop iteration

The challenge with agents has classically been: how do I build something reliable, and how can it run reliably in production? These patterns are largely what we've seen work.



Thanks for explaining. But I guess I'm still not clear on how the work gets divvied up. Not trying to be a hater - I'll have to give it a spin - but that part's a bit murky to me still.










