(comments)

Original link: https://news.ycombinator.com/item?id=43977901

Hacker News users are skeptical of Uber's "FixrLeak," a generative-AI tool for fixing Java resource leaks, arguing that established static-analysis tools such as SonarQube and Checkstyle already handle this problem effectively. Commenters question the value the AI adds, seeing it as marketing hype rather than real engineering benefit, since static scanners can easily flag these issues. Some find it hard to believe Uber engineers would overlook `try-with-resources` at scale, and note that the tool focuses on trivial leaks while ignoring more complex cases such as `UNBOUNDED_QUEUE.add(item)`. Users also question the cost-effectiveness and energy waste of using an LLM for this task and propose simpler, cheaper alternatives. Overall, "FixrLeak" is seen as an unnecessary and inefficient application of AI, likely driven by marketing rather than genuine technical need.


Original text
Hacker News
Fixrleak: Fixing Java Resource Leaks with GenAI (uber.com)
15 points by carimura 20 hours ago | 15 comments
A lot of commenters point out that there already are many established static checkers that do this. That is not what Uber attempts here.

Uber is not proposing a static checker. They even use SonarQube in their architecture. They propose using an LLM to resolve the leaks that SonarQube detects.



> “Resource leaks, where resources like files, database connections, or streams aren’t properly released after use, are a persistent issue in Java applications”

This was true maybe back in 2005. Java has had try-with-resources for a loooong time (since Java 7, in 2011). As I see it, this has been the dominant idiom for ages for handling resources that might leak.
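The idiom the comment refers to, as a minimal sketch (the `ReadFirstLine` class name and the demo file are illustrative, not from the article):

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ReadFirstLine {
    // try-with-resources closes the reader automatically when the block
    // exits, even if readLine() throws -- the standard idiom since Java 7.
    static String firstLine(String path) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            return reader.readLine();
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".txt");
        Files.writeString(tmp, "first\nsecond\n");
        System.out.println(firstLine(tmp.toString())); // prints "first"
    }
}
```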



People tend to forget. The Stream API is a good candidate that people tend not to consider for leakage. If you don't own your stream, and you don't know for certain that it comes from a collection, you'd better close it with a try block.
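A sketch of the distinction this comment draws (the class name and demo file are illustrative): `Files.lines()` holds an open file handle and must be closed, while a stream over a collection does not.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Stream;

public class StreamLeak {
    // Files.lines() keeps a file handle open until the stream is closed.
    // Stream implements AutoCloseable, so try-with-resources works here too.
    static long countNonEmpty(Path file) throws IOException {
        try (Stream<String> lines = Files.lines(file)) {
            return lines.filter(l -> !l.isEmpty()).count();
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".txt");
        Files.write(tmp, List.of("a", "", "b"));
        System.out.println(countNonEmpty(tmp)); // prints 2

        // A stream over a collection holds no external resource,
        // so no close is needed.
        long n = List.of("x", "y").stream().count();
        System.out.println(n); // prints 2
    }
}
```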


> This analysis ensures that FixrLeak skips functions where resources are passed as parameters, returned, or stored in fields, as these resources often outlive the function’s scope.

> FixrLeak delivers precise, reliable fixes while leaving more complex cases for advanced analysis at the caller level.

In other words, this will only fix trivial leaks, which are best seen as a language design issue and can be fixed by RAII, reference counting, etc.

It won't fix the more insidious leaks like `UNBOUNDED_QUEUE.add(item)` that are more likely to pass through code review in the first place.
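A hypothetical illustration of the kind of leak the commenter means (the `QueueLeak` class and queue contents are made up for this sketch): a static, unbounded queue grows the heap on every `add()` without a matching drain, yet no `Closeable` is involved, so resource-leak checkers and try-with-resources rewrites never see it.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class QueueLeak {
    // A static, unbounded queue: nothing here implements AutoCloseable,
    // so tools that only track files, sockets, and streams stay silent.
    static final Queue<String> UNBOUNDED_QUEUE = new ConcurrentLinkedQueue<>();

    static void enqueue(String item) {
        UNBOUNDED_QUEUE.add(item); // leaks memory if nothing ever drains the queue
    }

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) {
            enqueue("event-" + i);
        }
        System.out.println(UNBOUNDED_QUEUE.size()); // prints 3
    }
}
```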



How much effort was spent automating this to fix 112 instances across Uber’s code base? I assume code reviews would catch any new issues so this seems like overkill for a small one-off task?


Spotbugs or checkstyle etc... would catch these. What does AI add here?


It gives the marketing team at Uber a chance to say "wE uSe AI hErE!!1". C-levels approve since anything AI gets a nice pump.

Engineering-wise, this adds nothing. It's an absolute waste of compute and energy to run this through LLMs.



So you tell me those 200-600k software engineers that can easily solve leetcode hard are so incompetent they missed using try-with-resources at such scale, they needed to introduce new AI tooling to fix it?

Hey Uber, I am from the EU. I usually can't even solve leetcode medium, but I will write you scalable, spotless Java for a third of the salary.

Our industry and its economics are a joke.



So you write bug-free, scalable code 100% of the time in every job you've ever worked?

I guess we don't need QA or dev/staging environments then.



Can the QA team? And how does a dev/staging environment help you write less buggy code?


But can you leetcode heh.


Using AI when a static scanner like SonarQube easily picks up these types of resource leaks, especially in Java.

Peak waste.

What’s next?

"Get rid of your GitHub dependabot alerts and replace it with my shitty ChatGPT wrapper”



> Using AI when a static scanner like SonarQube easily picks up these types of resource leaks, especially in Java.

Exactly.

It's very disappointing to see that Uber engineers would rather trust an LLM that claims to spot these issues than a battle-tested scanner such as SonarQube, which would have caught them in the first place.

The LLM hype train is almost as bad as the JavaScript hype train of the 2010s, when some of the worst technologies were used for everything.



stupid af


Why exactly do you need LLMs for this when efficient alternatives like SonarQube or checkstyle already do this without the expensive waste LLMs create?

This adds little to no technical advantage whatsoever over existing solutions for this particular use case.









