Original link: https://news.ycombinator.com/item?id=43982463

A Hacker News discussion revolves around the perception of AI's capabilities and its impact on the tech industry. One user humorously observes that managers believe AI can replace their subordinates' reports, but not their own direct reports. Other commenters express skepticism about AI's immediate ability to replace developers, with some noting the tendency to overuse AI tools even for simple tasks. Concerns are raised about AI being used as an excuse to freeze junior hiring and cut salaries. Some suggest that offshore developers, known for variable quality, might be replaced by AI sooner due to AI's advantages in speed and consistency. However, others argue that AI models are trained on code potentially biased by offshore practices. The discussion also touches upon the debate of whether AI can replace venture capitalists, with differing opinions on the unique skills required for the job. Overall, the conversation highlights a mix of excitement, skepticism, and concern regarding the hype and actual capabilities of AI in the tech industry.


Original text
Developers, don't despair, big tech and AI hype is off the rails again (cicero.sh)
48 points by matt-cicero 1 day ago | hide | past | favorite | 30 comments

My lemma: “No one thinks their direct reports can be replaced by AI. Everyone thinks their direct reports’ reports can be replaced by AI.”


As one of those reports' reports, I have noticed that these decision maker business types love AI for their own job too. A teammate had to email another department for approval on some infra and our manager's manager told him to run it through our firm's proprietary LLM to touch it up when it was essentially "I'm on team X and we need approval to use Y for Z, is that ok?" Makes me wonder what is even going on in her brain if she thought something so simple needed an AI touchup.


That’s how we decision makers tell each other we are transforming our teams to be AI-first ;-). Touching up text is literally the shallowest way to apply AI, so it figures. The schizophrenic practice of asking for AI savings while cutting based on spans of control is an indication of how even big players that previously had team-topology maturity, such as Amazon, are losing the plot.


We had a chat where the text-touch-up director was asking all her teams to bump up test coverage using Copilot, except our team is already very well covered, so we haven't really contributed. But the other teams sitting at around 40% are asked to submit their "prompt of the week", and every time it's longer and more convoluted than just writing out the unit tests. Really incredible stuff. Also, what do you mean re team-topology maturity?


I'm supposed to be an SRE... the industry is so off the rails that my job is better described as "YAML peddler".

Not worried in the slightest. Just exhausted and annoyed.



The big problem is a bunch of folks actually take these things seriously and use it as an excuse to freeze the junior hiring pipeline.

At the senior levels this is not actually believed by the powers that be, since a bunch of hiring is still happening to compensate for overdone layoffs in spots, etc.



The big corps poured a shit ton of money into AI, so now they have to cut the salaries of human employees, or just cut the employees.


Everyone wants the benefits of AI for themselves, but doesn't want others to benefit from AI: screenwriters and studios, college students and professors, etc.

It's a very natural (if not honest) situation to try and get an edge in a competitive (not cooperative) environment.



Very impressed with how much OP can do as a blind person.


Think about all the distractions and mind games you avoid by closing your eyes. I have been very impressed by the blind folks I've been in contact with on a recent project, and have thought about spending a few days emulating the constraint: only read pure text, only use the keyboard, and when consuming entertainment, only listen, don't look...


The greatest trick MEGACORP ever pulled was making software engineers think they were being replaced by AI, when they were actually being replaced by cheaper near-/offshore devs working remotely.


My gut feeling is that the offshore workers are going to be replaced by AI first. They unfortunately have a reputation for bad-quality work, and it’s hard to know ahead of time who is good and who isn’t because there aren’t many connections (and connections are how people are going to be hired, exclusively, moving forward IMO).


Offshore workers and generative AI have a lot in common. Little formal training. Lots of book smarts but zero context. Performs ok at well specified tasks, extremely poorly otherwise. Cannot understand human-written design documents with enough nuance. No aesthetic design sense. And finally their undisputed ability to pump out high volumes of code (including high volumes of garbage).

The only difference is that LLMs have a deeper and wider understanding of English, no time zone barriers, and nearly instant response time. I find it hard to picture a world where AI doesn't decimate these types of low-skill software jobs.



Maybe because off-shoring is so prevalent, most of the code nowadays is written offshore; that is the code that went into training LLMs, and that code then biases LLMs toward writing like an offshore engineer.

The solution is obvious: we need to employ the best and brightest minds, put them in spaces where all their needs are met and they are protected from bureaucracy, off-the-rails stakeholders, and PMs, so that they can write the cleanest, most Carmackian code there can be. Then, after that generation of code is written, we use it as training data for the next generation of LLMs.



The thing is that now a lot of them have been moved onshore and into management levels at a lot of US-based companies. From what I've seen, not much better at those aspects than they are at coding. Hard to know if they'll cut loose everyone on the ladder below them or want to keep a large headcount below them.


> They have a reputation of bad quality

That is an extreme understatement.



I claim that by the end of the year, all VC jobs will be replaced by AI. But for some reason my claim isn't taken seriously, or isn't very popular!


Myth BUSTED: AI can't do coke or ketamine with founders, and so has no value in your use case.


Marc Andreessen would beg to differ :)

On a recent a16z podcast, Andreessen said:

“It's possible that [being a VC] is quite literally timeless, and when the AIs are doing everything else, that may be one of the last remaining fields that people are still doing."

Interestingly, his justification is not that VCs are measurably good at what they do, but rather that they appear to be so bad at what they do:

“Every great venture capitalist in the last 70 years has missed most of the great companies of his generation... if it was a science, you could eventually dial it in and have somebody who gets 8 out of 10 [right]. There's an intangibility to it, there's a taste aspect, the human relationship aspect, the psychology — by the way a lot of it is psychological analysis."

The podcast in question: https://youtu.be/qpBDB2NjaWY

(Personally, I’m not quite sure he actually believes this - but watching him is a certain kind of masterclass in using spicy takes to generate publicity / awareness / buzz. And by talking about him I’m participating in his clever scheme.)



Even his justification for why AI can't become a VC suggests you could just pick by random chance and have the same odds of success, which means even the personal touch he's advocating is useless. A monkey could do his job.


That sounds more like him trying to justify all of the possible harms to society as a whole to his peers.

"It'll screw everyone else, but we'll be okay, so..."



Ignore celebrities.


The way I’d read that take is that being a “good” VC is about having enough money to spread around and enough networking connections to generate the right leads. After that pretty much any idiot can do the job.

Tldr AI can replace labor but not capital. More news at 11.



Well it is capital. It's just different capital.


Rogan is a red flag, I’ve seen the kind of content he platforms and the audience that consumes it.


I don't think anyone was taking this idea any more seriously than cryptocurrency replacing the banks?


If those were real claims (I don't follow those guys, because why on earth would I do that to myself; my free time is for completely different matters), why is anybody still taking them seriously?

People bash Trump for his momentary brainfarts, yet this is exactly the same kind of stuff. Are they really trying to imitate the same behavior, with similar consequences? They should be ignored by devs and investors alike (or bet against via shorts or similar instruments). Real progress looks different.



A large number of people do not take Zuckerberg or Altman seriously and do bash them, but there is also a contingent that do. This is similar to Trump; about 1/3 of America listens to him and think he’s talking sense. Note that these comments were made on Joe Rogan’s show, apparently. I’ll leave you to consider what sort of audience Rogan appeals to.


[dead]



> Every day, instead of picking up where you left off, you need to re-train the AI assistant. Granted, you could maintain an ever-changing set of training prompts, but this adds an extra development layer to the project.

If you aren't actually having the LLM write short-term memory files, or using the feature in practice, why should I believe you're qualified to speak on how well it actually works?

To be clear, this isn't a comment on the feasibility of bold claims made by people with a significant financial interest in those claims.
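The "short-term memory files" mentioned above can be sketched in a few lines of Python. This is a rough illustration of the practice, not any real tool's API; the file name and helper functions are made up for the example. The idea is to persist a few session takeaways to disk and prepend them to the next session's first prompt, instead of re-explaining the project every day:

```python
from pathlib import Path

# Hypothetical memory file; real assistants use their own conventions.
MEMORY_FILE = Path("assistant_memory.md")
MEMORY_FILE.unlink(missing_ok=True)  # start fresh for this demo

def save_note(note: str) -> None:
    """Append one session takeaway to the memory file."""
    with MEMORY_FILE.open("a", encoding="utf-8") as f:
        f.write(f"- {note}\n")

def build_prompt(task: str) -> str:
    """Prepend accumulated notes so the assistant 'remembers' prior sessions."""
    memory = MEMORY_FILE.read_text(encoding="utf-8") if MEMORY_FILE.exists() else ""
    return f"Project notes from earlier sessions:\n{memory}\nToday's task: {task}"

# End of yesterday's session: record what the assistant learned.
save_note("API uses snake_case endpoints")

# Start of today's session: the note rides along with the new task.
prompt = build_prompt("add pagination to /list_items")
```

The "extra development layer" the quoted article complains about is exactly the upkeep of that notes file: deciding what to save, pruning stale entries, and keeping it small enough to fit in the context window.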



This article is off the rails in its own way. Yeah, we all know how LLMs hallucinate and how that's (currently) an impossible hurdle to get over.

But pre-ChatGPT, what AI can do today was itself an even MORE off-the-rails claim than anything in this article. What AI can do now is unthinkable, to the point where you could have been sent to the mental ward of a hospital for predicting it. The Turing test was leapfrogged, and everybody just complains that AI is garbage and moves the goalposts.

It’s not that the claims are wildly overblown. They’re only overblown a little, not by an outrageous amount.

It’s that the hype is pervasive. We see it everywhere, and we are riding along with it. AI has infiltrated our lives so deeply that we are simply no longer impressed, so all kinds of people say AI is overblown when really it isn’t by much. AI agents that code for us? We are 50 percent of the way there. It’s the last 50 percent that’s brutally hard to make happen, but it’s not completely out of this world for a company to try to jump that gap in a year. We’ve made incremental progress.

If Elon invented a space-faring vehicle with a light-speed drive, available for anyone to purchase and fly for $5, then I guarantee the hype would blow up to the point where people got sick of it, just like with AI.

People would be talking about how space travel and light-speed drives are overblown. "I’m not impressed that it still takes 4 years to get to Alpha Centauri, are you kidding me?"
