(Comments)

Original link: https://news.ycombinator.com/item?id=43692476

Mrge.io (YC X25) has launched an AI-powered code review platform aimed at merging code faster with fewer bugs. Founders Allis and Paul ran into the code review bottleneck firsthand. Mrge connects to GitHub (GitLab support is coming soon) and uses AI to analyze pull requests inside a secure, ephemeral container with full codebase context. The AI reviewer uses shell access and a Language Server Protocol server for deep code navigation, mimicking a human reviewer. Once the review finishes, the sandbox is destroyed and the code deleted for security. Mrge's web app presents changes in logical order and highlights the important diffs for faster human review. Inspired by tools like Linear and Superhuman, Mrge prioritizes a polished user experience, offering a desktop app, keyboard shortcuts, and logical file grouping. The platform is free during this early stage; eventually closed-source projects will be charged per seat, while open-source projects stay free.


Original text
Launch HN: mrge.io (YC X25) – Cursor for code review
13 points by pomarie 54 minutes ago | 3 comments
Hey HN, we’re building mrge (https://www.mrge.io/home), an AI code review platform to help teams merge code faster with fewer bugs. Our early users include Better Auth, Cal.com, and n8n—teams that handle a lot of PRs every day.

Here’s a demo video: https://www.youtube.com/watch?v=pglEoiv0BgY

We (Allis and Paul) are engineers who faced this problem when we worked together at our last startup. Code review quickly became our biggest bottleneck—especially as we started using AI to code more. We had more PRs to review, subtle AI-written bugs slipped through unnoticed, and we (humans) increasingly found ourselves rubber-stamping PRs without deeply understanding the changes.

We’re building mrge to help solve that. Here’s how it works:

1. Connect your GitHub repo in two clicks (and optionally download our desktop app). GitLab support is on the roadmap!

2. AI Review: When you open a PR, our AI reviews your changes directly in an ephemeral and secure container. It has context into not just that PR, but your whole codebase, so it can pick up patterns and leave comments directly on changed lines. Once the review is done, the sandbox is torn down and your code deleted – we don’t store it for obvious reasons.

3. Human-friendly review workflow: Jump into our web app (it’s like Linear but for PRs). Changes are grouped logically (not alphabetically), with important diffs highlighted, visualized, and ready for faster human review.

The AI reviewer works a bit like Cursor in the sense that it navigates your codebase using the same tools a developer would—like jumping to definitions or grepping through code.

But a big challenge was that, unlike Cursor, mrge doesn’t run in your local IDE or editor. We had to recreate something similar entirely in the cloud.

Whenever you open a PR, mrge clones your repository and checks out your branch in a secure and isolated temporary sandbox. We provision this sandbox with shell access and a Language Server Protocol (LSP) server. The AI reviewer then reviews your code, navigating the codebase exactly as a human reviewer would—using shell commands and common editor features like "go to definition" or "find references". When the review finishes, we immediately tear down the sandbox and delete the code—we don’t want to permanently store it for obvious reasons.
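For illustration, here is a minimal Python sketch of that lifecycle. This is only a guess at the shape of the workflow, not mrge's actual implementation: the language-server command is just an example, and run_ai_review is a hypothetical placeholder for the AI reviewer itself.

    import shutil
    import subprocess
    import tempfile
    from pathlib import Path

    def review_pull_request(repo_url: str, branch: str) -> list[str]:
        """Clone the PR branch into an ephemeral sandbox, review it, tear it down."""
        sandbox = Path(tempfile.mkdtemp(prefix="mrge-sandbox-"))
        lsp = None
        try:
            # 1. Check out the PR branch into an isolated working copy.
            subprocess.run(
                ["git", "clone", "--branch", branch, "--depth", "1", repo_url, str(sandbox)],
                check=True,
            )

            # 2. Provision reviewer tooling: shell access (via subprocess) and a
            #    language server speaking LSP over stdio. Which server to start
            #    depends on the repo's language; this one is just an example.
            lsp = subprocess.Popen(
                ["typescript-language-server", "--stdio"],
                cwd=sandbox, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
            )

            # 3. The AI reviewer navigates the code (grep, go-to-definition,
            #    find-references) and returns line comments. Hypothetical hook.
            return run_ai_review(sandbox, lsp)
        finally:
            # 4. Tear down: stop the language server, delete the checkout.
            if lsp is not None:
                lsp.kill()
            shutil.rmtree(sandbox, ignore_errors=True)

The property we care about most is the finally block: the checkout and the language server never outlive the review.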

We know cloud-based review isn't for everyone, especially if security or compliance requires local deployments. But a cloud approach lets us run SOTA AI models without local GPU setups, and provide a consistent, single AI review per PR for an entire team.

The platform itself focuses entirely on making human code reviews easier. A big inspiration came from productivity-focused apps like Linear or Superhuman, products that show just how much thoughtful design can impact everyday workflows. We wanted to bring that same feeling into code review.

That’s one reason we built a desktop app. It allowed us to deliver a more polished experience, complete with keyboard shortcuts and a snappy interface.

Beyond performance, the main thing we care about is making it easier for humans to read and understand code. For example, traditional review tools sort changed files alphabetically—which forces reviewers to figure out the order in which they should review changes. In mrge, files are automatically grouped and ordered based on logical connections, letting reviewers immediately jump in.
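As a rough illustration of what "grouped logically" can mean (this is only a sketch of the idea, not mrge's algorithm, and the src/ heuristic is an assumption), a grouping could start from something as simple as bucketing changed files by their feature directory instead of sorting them alphabetically:

    from collections import defaultdict
    from pathlib import PurePosixPath

    def group_changed_files(paths: list[str]) -> dict[str, list[str]]:
        """Bucket changed files by feature directory rather than sorting alphabetically."""
        groups: dict[str, list[str]] = defaultdict(list)
        for p in paths:
            parts = PurePosixPath(p).parts
            if len(parts) == 1:
                key = "(repo root)"            # top-level files get their own group
            elif parts[0] == "src" and len(parts) > 2:
                key = "/".join(parts[:2])      # e.g. src/auth, src/billing
            else:
                key = parts[0]
            groups[key].append(p)
        return dict(groups)

    changed = ["src/auth/session.ts", "src/auth/token.ts", "src/billing/invoice.ts", "README.md"]
    for group, files in group_changed_files(changed).items():
        print(group, files)

    # Output:
    # src/auth ['src/auth/session.ts', 'src/auth/token.ts']
    # src/billing ['src/billing/invoice.ts']
    # (repo root) ['README.md']

A real implementation would presumably also look at import relationships between the changed files, but even a crude directory heuristic makes the review order follow the structure of the change rather than the alphabet.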

We think the future of coding isn’t about AI replacing humans—it’s about giving us better tools to quickly understand high-level changes, abstracting more and more of the code itself. As code volume continues to increase, this shift is going to become increasingly important.

You can sign up now (https://www.mrge.io/home). mrge is currently free while we're still early. Our plan for later is to charge closed-source projects on a per-seat basis, and to continue giving mrge away for free to open source ones.

We’re very actively building and would love your honest feedback!

It looks like graphite.dev has pivoted into this space too. Which is annoying, because I'm interested in graphite.dev's core non-AI product. Which appears to be stagnating from my perspective -- they still don't have gitlab support after several years.


This looks like a cool solve for this problem. Some of the other tools I tried didn't seem to contextualize the app, so the comments were surface level and trite.

I'm on Bitbucket so will have to wait :)



Thanks, really appreciate that! Yeah, giving the AI the ability to fetch the context it needs was a big challenge (since larger codebases can't all fit in an LLM's context window).

And totally hear you on Bitbucket—it's definitely on our roadmap. Would love to loop back with you once we get closer on that front!