Comments

Original link: https://news.ycombinator.com/item?id=37870437

A summary of the recurring themes about AI and the marketing around it:

1. Using the word "AI" to describe a product or service does not necessarily mean it is genuinely intelligent or valuable. In some cases "AI" is added purely as a gimmick to attract potential buyers. This tactic oversimplifies complex concepts and leaves the target audience confused about the difference between real artificial intelligence and marketing hype.

2. Many of the criticisms highlight the problem of emphasizing product functions or features instead of discussing actual customer benefits. While explaining a product's underlying principles may convey technical expertise to industry experts, such discussion usually fails to show how the product meets customer needs.

3. Another recurring criticism is that bolting AI capabilities onto a product takes more effort and resources than achieving similar results through conventional methods, so implementing AI this way often yields poor outcomes for both developers and customers.

Overall, these criticisms stress the importance of clarity and transparency about products and services, rather than leaning on buzzwords like "AI" for marketing. By focusing on educating customers about the value proposition of integrating AI into a product or service, companies can improve their credibility and trustworthiness in the eyes of their target audience, ultimately improving retention and overall revenue growth.

Original thread

Every app that adds AI looks like this (botharetrue.substack.com)
580 points by vitabenes 5 days ago | 209 comments

My (briefly viral) personal take: https://meat-gpt.sonnet.io

- a huge chunk of my traffic comes from people who believe this is a real project, because AI tool catalogues keep hallucinating pitches for my AI PRODUCT

- this almost won an award but I lost to a site with 3d rotating sandwiches

- the silver lining: I met a guy who pivoted from startups to making beef jerky. We are a small, exclusive community known to some as the meatverse.

I also made https://butter.sonnet.io and people offered to pay me for it.

Perhaps I should monetise my Medieval Content Farm with those native ads in blackletter: https://tidings.potato.horse



Ironically, Butter is both really really cool and exactly what generative AI is sustainably good for: reading large amounts of text (in this case video transcripts) for you, the user, and extracting structured or unstructured insights that lead to a more enriched experience. I, too, would pay for it!
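
(A minimal sketch of the pattern described here, reading a long transcript and pulling out structured insights, assuming the OpenAI Python client; the model name and prompt are placeholders, not Butter's actual implementation:)

```python
# Sketch: extract structured insights from a long transcript with an LLM.
# Assumes the openai package (>= 1.0); model and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def extract_insights(transcript: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Summarize this transcript as five bullet-point "
                        "insights, each with a short supporting quote."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(extract_insights(open("transcript.txt").read()))
```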

https://github.com/paprikka/butter/blob/main/src/watcher/det...

But, to OP's point (and possibly meat-gpt's point?) - any app that asks you, the user, to provide it with a whole bunch of unstructured text in real time just to get going... is annoying at best, and disconnected from our primal brains' urges to point at cool things and see cool things change, and share those cool things with real meat-space people. AI is great when it can facilitate less typing, less reading, less irrelevant ad-watching, and more time spent seeing relevant things to say "ooooooh" about.

(Also, thank you for making me want banana bread!)



I, too, was sick of being asked to write prompts, so I made https://nicer.email/


Great projects, amazing work.

I'm mostly here though to say thanks for the reminder about `.horse`, which I haven't seen since my friends and I would go on domain-buying blitzes and host static pages with repeating backgrounds of random images on them, in college.



hehe do not underestimate the power of edibles and rash decisions!

In all seriousness though, I like splitting my projects between two domains: .sonnet.io for the more serious stuff, .potato.horse for anything that I might not want to put on my CV.

(it's a good tradeoff between keeping one domain for everything and buying vanity domains en masse - which I used to be guilty of)



Loving your UIs for these projects, they have a nice minimal aesthetic. What's your tech stack if you don't mind me asking, seems Svelte for frontend?


> Perhaps I should monetise my Medieval Content Farm with those native ads in blackletter: https://tidings.potato.horse

I wanted to like that, but their font is fake. They don't even use the proper ſ. https://en.wikipedia.org/wiki/Long_s



Thank you for that comment. I'm a bit of a typography nerd (and uſed to tranſlate manuſcripts, albeit in middle-Perſian, not Latin) and I DID conſider using it. β≠ß, right?


Agreed.

It's a shame that the state of ligatures for Fraktur and similar is so abysmal, even for TeX. Or at least it was the last time I checked.



> - this almost won an award but I lost to a site with 3d rotating sandwiches

meat-gpt is awesome, but now I want to see what won out over it, have a link by chance?



https://rotatingsandwiches.com glorious and so forward-looking.


> https://meat-gpt.sonnet.io/

Now I'm nostalgic for the early 2000s :)



check out mmm.page

I also post about this sort of stuff occasionally in my TIL notes: untested.sonnet.io (see the notes titled "40" and "41").



I think this is an incredibly interesting piece of art. I want to see more websites like this. Thank you for making it, I really enjoyed it


Does butter work on FF? Love these sites!


IIRC it was FF compatible, but I haven't tried using it with FF.

Also, it's buggy and doesn't always work. It could be waaaay better with some fine-tuning. Still feels magical when it works.



Sandwich site?


not even one sandwich, but many: https://rotatingsandwiches.com


It so reminds me of Douglas Adams's Electric Monk from Dirk Gently:

"The Electric Monk was a labor-saving device, like a dishwasher or a video recorder. Dishwashers washed tedious dishes for you, thus saving you the bother of washing them yourself, video recorders watched tedious television for you, thus saving you the bother of looking at it yourself; Electric Monks believed things for you, thus saving you what was becoming an increasingly onerous task, that of believing all the things the world expected you to believe.

Unfortunately this Electric Monk had developed a fault, and had started to believe all kinds of things, more or less at random. It was even beginning to believe things they'd have difficulty believing in Salt Lake City. It had never heard of Salt Lake City, of course. Nor had it ever heard of a quingigillion, which was roughly the number of miles between this valley and the Great Salt Lake of Utah."



When I read

> "Try Headspace AI, where one of our robots will meditate FOR YOU and then tell you all the lessons it learned."

I also thought of the electric monk, although for some reason I remembered it as being something that watched television for you.



Hype trains exist in tech the same way they exist in the seedy world of multi-level-marketing (which unfortunately due to some past clients I have a lot of experience with).

It doesn't matter _what_ you're selling. All that matters is that the hype train passes enough people that you can cash out your "investment".

Each investment round is the cash out for previous ones, and the last round is the IPO, where ignorant (no negative connotation, just unknowing) public is left holding the bag of air.

And like the MLM pyramid schemes, sometimes the product or service does have some value to some people. But it gets sold to anyone and everyone, most of whom have no use for it.

Most products or services don't need AI. But for sure, you needed to say BLOCKCHAIN somewhere in your business plan a few years ago, and now you need AI.

Eventually maybe we should just embrace hype trains and accept that people like to get excited about things they don't understand, or things that are imaginary. Just go ahead and sell them a dream. Put AI in your widget and make more money. It's ok. But don't believe your own hype.



nailed it.

i have been part of 3 hype-trains - dot-com, cloud, and now ai. missed w3/crypto.

but, i havent seen such hype-trains in other parts of the us/world.



There was the SoLoMo (Social, Local, Mobile) hype train of the late 2000s which indeed came to take over the world as promised. Also the B2B SaaS hype train that Basecamp started in the mid 2010s has taken over as well


The first two moved past the hype and became commonplace, if not mundane. I guess in a decade or two there will be things using web3 principles in a boring way.


You don’t speak Japanese / hang out on Japanese internet?


No. But my AI will hang out on the Japanese internet for you.


no, but in other parts of the world, there is no sense that you will imminently a) change the world with some doodad gadget or b) be a millionaire by picking up a startup lottery ticket.


how do you know that if you don't speak other languages and hang out in their spaces?


I work for a Fortune 500 company. One of our senior technology execs was just listed on Business Insider's "Top 100 people in AI". Our company has no products that use AI. The closest we have come to actually doing anything with AI is a tiny trial of GitHub Copilot.


15 years ago I worked with a guy on NGO projects in Africa.

The projects were either prototypes, concepts, in testing, or small-scale. Plus he was honestly the type of manager who was better at talking than doing, so I wouldn't say he was much involved in them apart from claiming he was doing good.

Well, it worked, because he was in Time magazine's top 100 people the year after for "saving millions of children".

That was my wake-up moment about the media.



It's pretty interesting to me to see these patterns of who gets credit and who doesn't. My ex-wife got off of active duty and joined the national guard to go to college nearly 20 years ago before she got back in as an officer through ROTC. While she was in the guard, she deployed with a civil affairs unit to Djibouti, which we have always maintained a permanent presence in to secure shipping lanes for oil. She was their supply sergeant and she identified a problem with the local water supply and got them all the equipment they needed to fix it, and in the process probably legitimately saved at least thousands of lives. She got a bronze star for it, but certainly no magazines will ever have heard of her.

It's even more interesting that she got branch-detailed to the field artillery when she got back in, and I'm pretty sure she became the first female to ever command a combat unit in the Army when she filled in as XO of a forward line battery and they lost their permanent commander. But you'll never see her in a history book or read about her on Wikipedia because it was still illegal at the time for women to serve in combat units at all, and officially she was on the books as part of the brigade headquarters company, only filling in because they were severely understaffed.

I try to keep her in mind when I feel like I'm being slighted at work and not getting sufficient recognition for accomplishments as I earn a salary triple what she gets as a Lieutenant Colonel now.



Now imagine what it is like when they have someone that they dislike.


> Plus he was honestly the type of manager that was better at talking than doing

Well yeah, who else would the media think to profile?

When the media profiles OpenAI, most of the coverage will be on the CEO, who is more of a 'serial entrepreneur' than an AI specialist.



Every now and again, I think I'm in danger of becoming too cynical. It seems that I still have some way to go before my naïvety is properly calibrated to the real world.


I'm sure this would out your company, but if you can say, I'm curious what the blurb said about why they were included in the list. I mean, I'm sure the real reason is "a PR person with good contacts at Business Insider", but I think it would be illuminating to know what the BS reason they wrote up is.


Spoiler: all such "Top N people in X" lists are fake and somehow serve someone's business model.


I thought all those lists are marketing exercises, no?


The "Absurd, honest comedy delivered twice a weekish through the vulnerable personal essays; type your email...!" popup showed up, coincidentally, superimposed over the second image, the "what are you stuck on" prompt. It took me a bit to figure out it wasn't part of the pic! At least the hypothetical AI page made me click something before asking for personal information.


Is that what that says? I have noticed that every substack page pops up some stupid modal as you scroll down the page and, for the most part, it's made me avoid them when I can because it's just obnoxious. I'm contemplating a Chrome Extension or something that will auto close them but for now I just find that most of the time it's not worth clicking anyway.

I just close it immediately without reading it like the internet has trained me to do with literally every modal. You want my attention? Put some call to action in the text I'm reading. A modal popping up mid-content asking for my email is just a good way to signal to me that you don't value your content and that you just wrote it so you can get my information. And sorry to those folks, I don't even read it.



> You want my attention? Put some call to action in the text I'm reading.

Please no. In-article advertisements and link blocks are a plague on online publishing.

I'm sure Substack will eventually do this however. In the early days they didn't have a nagging modal. Over time they'll go full Medium.com and straight up hide free articles behind a signup/subscribe.



Ooof, ok, I hadn't thought that through. I guess I was thinking more of something at the end of the text, like I think The Guardian does, just opting not for a popup. But yeah, honestly, you're right. Don't put ads in my content either.


Can you just block that element with your adblocker?

FWIW, that’s just a Substack feature. It’s not like the authors all got together and decided they all wanted to make a modal pop up on scroll and interrupt your reading.



I'm not sure; I feel like usually a "well-made" site doing this will vary some identifier or something so that you can't just wholesale block them all.

And that's fair, I understand that, but they do still choose to publish there instead of on their own blog. My opinion on that site is kind of agnostic of who owns the problem; it's more just that I'm not going somewhere knowing that's going to happen when I do.



I was curious about this, and it seems like people have already created drop-in cosmetic uBO filters for Substack: https://ronitray.xyz/writing/clean-substack/

I haven't really looked, but I suspect some of the built-in uBO filters might work too.
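
(For reference, uBO cosmetic filters are one line each: a domain, `##`, and a CSS selector. The selectors below are guesses at what a Substack modal might match, not verified class names:)

```
! Hypothetical examples - Substack's real class names may differ
substack.com##div[class*="modal"]
substack.com##div[class*="popup"]
```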



Same for me as well. I thought the author was making a self-aware joke about people breathlessly self-promoting on substack, but nope.


I need to train AI to automatically click the "continue reading" link.


I had the same issue.


The problem is that companies are throwing tons of garbage at the wall to see what sticks. And somehow, investors are rewarding them for this.

Undoubtedly LLMs can be used to create useful features, but first companies will need to realize that

1) It will take more effort than throwing up an LLM front end for your product.

2) It's probably not going to be a flashy, "disruptive" feature. Something that makes user experience just 5% easier/more efficient is huge, but AI influencers won't breathlessly shill your feature if that's "all" that it does.

3) You have to think about problems to solve before thinking about solutions.



> The problem is that companies are throwing tons of garbage at the wall to see what sticks. And somehow, investors are rewarding them for this.

I'd turn that around. Investors are throwing tons of money at AI to see what sticks, and journalists are throwing tons of attention at AI to see what sticks. Companies are realizing how easy it is to get in on the action.

I just had a talk with the head of engineering at a startup that just came out of stealth mode. We talked for over an hour about the work they're doing, the market they're addressing, their technical challenges, and how their technical teams are structured, and AI didn't come up a single time. But if you go to their web site, it's all about "using advanced AI to supercharge your _____."



For companies, this thinking (chase the trend and make it profitable later) only works for a few. But those few do win big.


For founders, just getting lots of investment and burning through the venture capital can 'work'.

For employees, that can also be fun. Just make sure you get paid in real money, not in startup equity.



> For founders, just getting lots of investment and burning through the venture capital can 'work'

Not really in the long run.

>For employees, that can also be fun. Just make sure you get paid in real money, not in startup equity.

Please work for the founder above.



> Not really in the long run.

Why not? It worked for Adam Neumann. He set fire to tens of billions of investor money, and walked away with a few billion for himself. On a smaller scale, you can repeat this.

> Please work for the founder above.

Not sure what you mean?



It depends on how much it warps your business. As far as I could tell, the company I talked to had a couple of prominent front-end features that were AI-driven, nothing more.


But isn't the whole startup model throwing garbage at the wall and seeing what sticks? It's about quick testing. Even the word "venture" in venture capital is fitting.

I don't see any other way that it can work tbh. Do you?



Sure, but it's not just startups.

How many Fortune 500 companies have started mentioning AI on earnings calls because it's what Wall Street wants to hear?

How many companies are having internal "AI" hackathons that put the solution over the problem? (Am I the only one who has had to suffer through this?)



Optimising for dumb money trading on earnings call transcripts.


I think there is a difference between "do people want a different kind of search engine" and "I'll wrap some marketing fluff around a prompt box like 500,000 other startups"


I don't think that's fair, because people have to start somewhere.

To me, add a couple of useful features and all of a sudden something becomes a useful niche product.

Another AI writer is probably not so great; let it integrate with WordPress or some other CMS and maybe people will pay for it.

There are probably a lot of niches around some set of integrations and user interactions that save a lot of copying and pasting.



Yeah, there's no problem with that. I mean, being able to throw a bunch of garbage at the wall to see what sticks is arguably the reason why capitalism tends to economically outperform feudal and planned economies.

But that creates a sort of "caveat emptor" situation. A lot of everyday consumers benefit from being constantly reminded that this is how things work, because humans are very susceptible to some bad heuristics. A very, very common line of reasoning that one sees in discussions about tech, even on HN, is effectively, "A is a piece of garbage that someone threw at the wall, and it stuck. B is also a piece of garbage. Therefore, when we throw B at the wall it will stick."



I don't think that capitalism inherently implies "throw a bunch of garbage around". It's true that it may not be a priori possible to decide which way is the best - that's why competition is important - but that doesn't mean that everyone should purposefully run in a garbage direction just because of garbage trends and because the same few investors are funding all big companies.

It effectively turns the problem of finding good solutions to whatever problem (and often there's not even a problem that anyone can point to) into a brute-force search. I don't think that's efficient.



> I don't think that's efficient.

Uh oh, sounds like someone wants a command economy!

No, honestly, that's exactly what you're saying: "You shouldn't do X because I don't think it will have a potential payoff." Now in some particular cases where there are particular social or environmental harms we may pass laws to limit said behavior. After that point businesses and individuals have the right to waste money on whatever they like.

Note, this is why it's also important to have government-funded research on things that may not be profitable. While industry is screwing around trying to squeeze an extra penny out of a dime, research projects, while risky, can find hidden quarters.



> Uh oh, sounds like someone wants a command economy!

Is it possible to get your point across without accusing me of things I've never implied?

I didn't say that we should pass laws to prevent people from throwing shit at the wall. But I think that we live in a collective delusion if we think that that's the right way to make progress.



I don't think they were wrong about you implying it, though.

If you don't want people to assume you're proposing some form of command economy, then it's on you to concretely articulate how you envision avoiding the chaos of capitalism without resorting to the authoritarianism of a command economy.



If in your opinion, the only way to effect societal change is to pass laws, then that's on you.


I didn't really imply any sort of teleological character to what I was saying.

I do think that there's a tendency for this kind of thing to happen at a much larger scale in capitalist economies, but I don't think that's intentional. It's just an inevitable consequence of actual humans operating in such a system. That also doesn't mean that any one person consciously believes what they're doing is garbage - I'm sure they're all confident in their idea.

But it's really, really difficult for me to perceive a practical distinction between millions of people all operating under widely varying and often mutually contradictory beliefs, and simple randomness. Maybe that's my statistical training affecting how I view the world, though.



> The problem is that companies are throwing tons of garbage at the wall to see what sticks. And somehow, investors are rewarding them for this.

Happens in all hype cycles. See metaverses a year or so ago, a couple of rounds of crypto nonsense before that, and before _that_, why it's our old friend AI again (there was a brief period when everything was pretending to be an AI chatbot to get the VC moneys; that last AI bubble more or less died with the release of Microsoft Tay).



I thought we were out of the Dumb Money stage and in the Profitability and Frugality one.

I guess there is more money to torch. Oookay.



The one thing the world will never be short of is dumb, trend-chasing money.


If there's a gold mine, and everyone is scrambling to claim a share of it, how can I predict who will achieve the greatest success? The reality is that I can't. Since the outcome is uncertain, the strategy is to invest in anyone actively engaged at the gold mine.

This approach is obviously simplistic, loosely speaking. In practice the network effect would take precedence: I would favor investing in individuals I'm acquainted with, who possess a track record of experience and competence in generating wealth.

So the point is, there is nothing wrong with actively engaging at the goldmine and being rewarded for it by investors, who are motivated by clear rationales.



We need to mine for the limited amount of gold so we can be wealthy, have higher purchasing power than other people, and get our hands on limited resources... wait, what problem are we trying to solve again? Isn't the problem limited food, shelter and basic necessities?


Oh my god, this is exactly how I feel about AI hype.

I've been in the "AI" space for quite a while. The hype hurts, and it sets completely unrealistic expectations for both the utility of the tools and the cost of the tools.



"Gimme a high six" is a nice joke considering that neural networks are notoriously bad at generating decent hands.


The future crystal ball picture with the three fingers really seals the deal.

I feel the same way as the author. Adding "AI" to an app is a great way of signalling to me that you're a schlocky salesperson who can't wait to sell a lemon so you can make your alimony payments this month.

Machine learning is fine and useful. "Stick an LLM on it" is painfully obtuse. Now that every tech product and service is doing it, it's like the crypto craze all over again.



It's not missing a finger? If you're referring to the left-hand (for the viewer) pinky, it's curled and partially obscured by bijoux and by the ring finger, but the nail of that finger is visible pointing inwards toward the crystal ball.


I see it now that you mention it! It looks a bit strange to me still. I mistakenly wrote it off as another example of failed AI hands.


I laughed too, but if you've seen some of the latest SDXL models on Civitai, hands and fingers have been fixed. Some of the recent generated images are pretty realistic.


I also caught that :) Nailed it.


What really amazes me is that, with all the money thrown at AI scams, the small group of "hero" generative AI tool maintainers on GitHub barely gets any funding.

There are a few exceptions (like TheBloke getting funded for their quants, and GGML getting some seed money)... But stuff like InvokeAI, VoltaML, koboldcpp/artbot and the AI Horde, Petals, the ControlNet dev and such should all be swimming in cash, but instead they are burnt out with basically nothing. And I'm sure there are many more I don't know about.



Wasn't that the same outcome as the blockchain hype? Crypto-bros ran off with the money while a few developers, getting little or no funding, plugged away on GitHub.

The main difference is that I see way fewer people defending the A.I. hype and people are generally much more critical and realistic about the application of LLMs.



These tools are actually great and useful though, right now. I wasn't as into crypto, but I mostly remember the tools being "neat right now, and very useful in the future."

Stable Diffusion in particular has a huge community using it clustered around CivitAI; it's just flying under the radar.



> Stable Diffusion in particular has a huge community using it clustered around CivitAI

While CivitAI has some monetization and may itself be doing well between that and whatever capital it has available to burn before it needs to be self-supporting, I get the sense that most of the creators tuning models and sharing there are scrambling for support, so I'm not sure that's really a refutation.



No, it's not a sustainable business at all.

But the population is huge.

And most of the finetuners/mergerers are indeed not making much money, but (IMO) they are closer to "power users" than open source code maintainers, and shouldn't necessarily be raking money in like they are a lynchpin.



Don't tell the normies this! They're all shockingly easy to talk to or work with right now.

Another reason why they are skirting under the radar is that many of those kinds of folks are, shall we say, somewhat unsavory. Consider that Automatic1111 is also famous for racist RimWorld mods, actively credits 4chan in the developer credits, and uses Ho Chi Minh as a GitHub profile picture, meaning he's either Vietnamese or a deranged communist (or maybe even both!)

The reality is that a ton of AI innovation is happening on discord by people with anime profile pictures whose entire reason for their obsession with this stuff comes down to chasing dopamine rushes - and hardly anyone outside of the space knows about it!



First of all, sd-web-ui is not an example I was thinking of, because it has a troublesome licensing history, the codebase is a hot mess, and it is insanely popular, among other things. It has no business getting VC funding, for many reasons, lol. But:

> also famous for racist rimworld mods

Not at all. Buzzy social media reported this, but... uh, the Rimworld modding community is hard to explain, but Automatic's "white only" mod isn't really white only, and isn't even a blip on the seedy modding radar. His modding style kinda resembles the GitHub SD project though.

> actively credits 4chan in the developers

So what?

> ho-chi-min as a github profile picture

It was his Steam profile before sd-web-ui, and so what if he is Vietnamese?

> The reality is that a ton of AI innovation is happening on discord by people with anime profile pictures whose entire reason for their obsession with this stuff comes down to chasing dopamine rushes - and hardly anyone outside of the space knows about it!

Yeah, this is true, lol. It's not all fappers and anime waifu makers though; the "discord" push behind the UIs and augmentations is pretty diverse.



One would have thought that people in tech and all this "entrepreneurialism" would have a bit more rationality and common sense, but it really seems that you can actually sell shit to everyone. I'm sad to see these shitfluencers on Twitter gaining so much traction.

But it is a great filter for whom not to listen to in this virtual flood of opinions and advice.



While I'm sure a lot of this is tech folks, I think part of the problem here is that it enabled any rando sleazeball off the street to be like "Hey I made an AI product. Look!"

I've personally known a few of these folks at this point, and yeah, they don't actually even understand what they're doing half the time; they're just watching YouTube tutorials and setting things up, not caring/realizing/whatever that what they're calling an "AI Product" is actually just a bot they connected to some API and fed a really brittle system prompt into. But everyone should be impressed by them because they managed to ~follow a youtube tutorial~ set it up! (Sorry, kind of PTSD from one in particular)



I know exactly what you mean, but these kinds of people were sold on a particular idea of what being a founder/entrepreneur is. Just grind hard and make products and at some point you will make it. They see emerging trends as opportunities without having any knowledge of them, and if they are lucky, they will convince some people to actually buy in.

But again, one would think that the venture capitalists who are throwing, apparently, hundreds of millions of dollars into it would have an eye to filter out these kinds of characters.



Evaluating a business is tough.

VC is like that because many of them missed the boat on many very profitable startups from 2001 thru 2010, where "stupid business trick, now with software!" was almost always an OK bet.

AI is the new dot com, which means we are at least one more bubble away from mass usefulness



> But again, one would think that venture capitalists who are throwing apparently, hundreds of millions of dollars into it, would have an eye to filter out this kind of characters.

Yeah, this is the most worrying part to me. Especially after many just got burned by crypto.

I think some of the "filtering" is laundered and disrupted by VCs giving money to legitimate businesses who turn around and spend it on very questionable AI products.



VC is a tricky business. Most operating at scale see more variety in their deal flow than they could hope to understand in any deep way. This is one of the reasons team & traction matter so much more in early funding rounds (1); evaluating product is difficult even for engineers with domain experience (2). Much more so for a finance major with domain experience in private equity & TED Talks. Hype trains give VCs three ways to win: the hype proves real; the hype lasts long enough to WeWork an exit; a portfolio that includes prominent hype-sector start-ups helps ensure the VC's next fund is fully subscribed.

Add to that the recognition that all going-somewhere trains were hype trains at one point in time and it’s silly to expect the latest round of fraud and tulip-mania to teach VCs lasting lessons.

1. I’m told biotech/pharma VCs are more product versed and focused.

2. “When this product hockey sticks there won’t be time to rebrand. That’s why we have to change to Meta now. Or maybe X. Whichever has the better bespoke font.”



I remember a time some 25 years ago when shareware developers, years after Winamp, discovered that you could leave the rectangular UIs behind and make "skins" for their software. For several years everybody was adding skins for good or for bad reasons. It has become an indicator for me: if a new major version of a software has "Skins!!!" as the first item of the changelog, you could assume real innovation has stopped and you could reasonably expect its downfall in the next couple of months or years at most.

In the case of AI it's a bit different for existing software and probably more of a hype train fueled by marketing departments than true lack of ideas. But the readiness to jump on these trains should be alarming, especially when long-existing features are suddenly being replaced by "AI". I hope this will end soon and after sifting through fad ideas we'll be left with few, but really usable products.



This article shows the appropriate amount of disrespect.

> They don’t call it AI because they are not children.

Word. But nobody will heed their appeal to “stop it”. The hype is just too sweet, and too many people want to believe in miracles.



I for one hope the hype fever pitch dies down a little, because I love AI. Love LLMs, diffusion models, the various audio transformers. Love the ecosystem, love the PoCs, I just love it. I use some sort of gen AI model every day to great avail. I can do without the marketing hype to be sure, but to me this technology is still exciting, even after working with it for the last 3+ years; it's just sci-fi AF.


Yeah this is the same take I have! This field is currently both genuinely amazing and very irritatingly hype-y.

This is just the normal hipster take though; we're annoyed that something we love has "gone mainstream" and "sold out" :)



AI gold rush is the new underpants gnomes: https://imgur.com/a/x9isuwO

Nobody's got any idea how that's actually going to turn into a profit but everyone's busy stealing underpants.



More like the blockchain fad. Just as anybody can fork Ethereum to create their own shitcoin, anybody can throw a UI layer over Stable Diffusion and pat themselves on the back for their innovation.


So nobody's gonna comment that the guy in the painting is getting pickpocketed while watching the magic trick? This is the real gem from this rant.


omg. I wrote the original piece and cannot believe I missed this


Off topic: Since AI has become an overloaded term and I like to build AI for video games (finite state machines, behavior trees, utility functions, etc) does anyone have a suggestion on what term to migrate to so when I say “I like writing AI for games” the listener knows what I’m referencing?
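
(For readers who only know the LLM sense of "AI", a toy sketch of the classic game-AI kind the commenter means: a finite state machine for a guard NPC, in Python. Illustrative only, not from any shipped game.)

```python
# Toy finite state machine for a guard NPC.
from enum import Enum, auto

class State(Enum):
    PATROL = auto()
    CHASE = auto()
    ATTACK = auto()

def next_state(state: State, dist_to_player: float) -> State:
    # Transition rules: each state only checks the conditions it cares about.
    if state is State.PATROL and dist_to_player < 10:
        return State.CHASE
    if state is State.CHASE:
        if dist_to_player < 2:
            return State.ATTACK
        if dist_to_player > 15:
            return State.PATROL
    if state is State.ATTACK and dist_to_player >= 2:
        return State.CHASE
    return state

state = State.PATROL
for d in [12, 8, 1.5, 5, 20]:  # simulated distances over a few ticks
    state = next_state(state, d)
    print(d, state.name)
```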


I don't think this is off topic, and I'm curious if anyone has a good answer. I see the same issue with people who work on "old school" machine learning techniques for the normal useful things that aren't "generative". Like, my company does a lot of time series forecasting, and sure, it's fine to market that as "AI", but that terminology has become a lot less useful for serious discussions of what we do, like with potential candidates. I guess just saying "ML" works fine, even though "generative AI" is also ML, but people don't tend to use that terminology for it much.


I use "GOFAI" for that sort of thing: "Good Old Fashioned AI".


intelligent agents (IA) (for video games)?


This is my favorite. Thanks.


The speed with which AI has become the new hot thing everyone must have is dizzying to me. It seems like it happened over a span of weeks after ChatGPT went live.

I wonder how much faster we can go. Can we have an entire tech hype bubble in one 24 hour period?



> Can we have an entire tech hype bubble in one 24 hour period?

Those happen when person [X] says [controversial thing that's also a huge, earth-shattering claim] prior to [product launch]



This article is lazy and contains very little value. I'm convinced the people commenting here have not read it.


Tend to agree about the article's content (though there are a couple good jokes), but what people are commenting on here is the thesis of the article, that the point we're at in the "generative AI" hype cycle is noisily creating a lot of useless crap.


Cynicism is a cheap way to appear smart and dumb people lap it up. So I'm not surprised it's upvoted to the top of HN.


This is true, but the article is also right about the current state of AI apps. I can't wait for the hype to die down, for the nonsense to die out, and for the signal to noise ratio to shoot up. There is tons of real signal in the space! Pure cynicism on it is not warranted. But it's also reasonable to point out how much noise there is at the moment.


I'm not convinced he read his own materials, when he describes a painting - which is famous for showing pickpocketing of people distracted by a magic trick - as

> This dude is looking up like ‘no way he did that without god's help’

No, the dude is looking up so as to distract people and avoid signaling what he's doing, while he steals the wallet of the rube bending over in front of him... (Which is extremely on point and topical for OP's thesis, yet OP apparently does not realize it.)



It may have no content, but it is very easy to read and confirms the pre-existing biases of nerdy contrarians. It will go far here.


Is it possible that your own bias towards AI makes you somewhat blind to the scale of garbage AI tools being shoved in people's faces daily?


Anything is possible, but it doesn't change the veracity of my comment.


It's a criticism of the marketing behind common B2B AI tools. What's lazy about it? You don't see an enormous mismatch between the promises and reality?


Probably the section where it uses a hand drawn paint.exe example as what AI commonly outputs. Just spitballing.


maybe i'm dumb, but i don't see any place in my life right now where a bot (i refuse to call LLMs AI. they are not intelligent.) could help me.

chatgpt? well, i used it for some funny things (like writing a poem about an amazing sandwich my wife made), but i never think "let me use this" whenever i have a problem.

image generation? i mean, maybe for memes? i tried bing-image-generator while high and had some laughs, but for the life of me i cannot see myself using it in any other way.

i also tried the notion ai stuff, but honestly, i just prefer writing everything myself, since writing is a skill that needs improvement and you can only improve by actually doing it.



As a programmer, it ended up being useful for a few things in the beginning, but I don't even use it anymore. Maybe every other week to reformat some text or make some boilerplate code.


GPT is really good for finding insertion points into large-breadth topics where you don't really know where to begin. I've found great joy in asking it particle physics questions. I don't know the math to google the right terms, but it's been really good about taking my layperson questions and translating it into answers that are digestible.


I’ve used it as a software engineer most days ever since week 2, when I first learned about ChatGPT. It is incredible at making me more efficient by answering dumb questions such as what edge functions are, and getting unstuck on new topics such as how OAuth2 works in YouTube APIs, because I’m forced to use it for a server-to-server app.


Using this with Bing, so it searches for articles and reads and summarizes them, has been great! I mean, admittedly it'd probably be better if it used a better search engine, but at least with this workflow you know it's less likely to regurgitate garbage.


I've used it to find a couple of different Python libraries. Asked it questions about top-level ontologies and OWL. It was pretty insightful on the ontology stuff.


I've found LLMs to be very useful for, well, text-based things. I've not found the "bot" implementations useful, but they're better tech for summarization, highlighting important sections (i.e. what sentence of this product review should I show in bold, given the search term), and entity recognition (what are all the products mentioned here).

They are expensive to run, in terms of GPU cycles, but they are noticeably better than the previous models.

It's also hard to constrain them well. If you want 95% accuracy, it takes some tuning work. If you also want to avoid 1% total batshit nonsense (repeat "chicken" 50 times), then you have to check for that. Earlier models were sometimes wrong, but they were not quite so aggressively wrong as the 1% case of LLMs.

That's just my anecdotal experience, but it leaves me both optimistic about applications in the right spaces and worried that people are just shipping something that's OK 75% of the time and calling it a product.
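
(A minimal sketch of the kind of check described above, catching the degenerate repeat-"chicken"-50-times failure mode; the thresholds are arbitrary placeholders a real system would tune against labeled failures:)

```python
# Crude degenerate-output detector: flags responses dominated by one token.
from collections import Counter

def looks_degenerate(text: str, max_ratio: float = 0.3, min_tokens: int = 10) -> bool:
    tokens = text.lower().split()
    if len(tokens) < min_tokens:
        return False
    most_common = Counter(tokens).most_common(1)[0][1]
    return most_common / len(tokens) > max_ratio

print(looks_degenerate("chicken " * 50))  # True
print(looks_degenerate("a varied sentence with many distinct words in it, none repeated"))  # False
```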



To your last point, LLMs can translate your writing into different styles. Try feeding it one of your writings, then asking it "now make it sassy." It's also good at writing low-effort things like Instagram product blurbs.


I like the other comment somewhere on this post: the real uses will just be small improvements. I used image generation to make a quick favicon for a website; it came out nice enough, better than what I could do. I also think the semantic search stuff with language as a query is pretty useful. I was able to download my emails, generate embeddings, and write queries for my emails better than what the Gmail search would do.
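
(A sketch of that email-search workflow, assuming the sentence-transformers library; the model name is one common choice, and the emails are made up:)

```python
# Semantic search over emails with embeddings - a sketch, not the commenter's code.
# Assumes: pip install sentence-transformers numpy
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small, widely used model

emails = [
    "Your flight to Berlin is confirmed for Oct 12.",
    "Invoice #4821 is due at the end of the month.",
    "Team offsite agenda attached, see you Thursday.",
]
email_vecs = model.encode(emails, normalize_embeddings=True)

def search(query: str, k: int = 2) -> list[str]:
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = email_vecs @ q  # cosine similarity, since vectors are normalized
    return [emails[i] for i in np.argsort(-scores)[:k]]

print(search("when do I travel?"))
```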


> All of Google is AI. But you don’t see them bandying it around like a kid with new light up shoes.

I was at Google Cloud Next London yesterday and I hate to disappoint you but _everything_ seemed to be about AI. The keynote was about AI. The decor was all AI generated. Each breakout had to mention AI, to the point where a couple of speakers joked that they _weren't_ going to talk about AI. It was a bit depressing.



I was at Google's recent Generative AI conference and their latest and greatest LLM can't even answer the "Sally has 3 brothers" question, embarrassing.


Every substack post looks like this: 1. look at my emotional take on a hot topic 2. why don't you subscribe to my substack 3. ok i got you to the end? I have something to sell to you


People don't write for free on the internet anymore. If it's free, it's a prelude to an ad of some sort.


I don't believe that's true, my RSS feed has more than I can read. Just that the people behind those blogs aren't desperate for attention or trying to find clients.


> All of Google is AI. But you don’t see them bandying it around like a kid with new light up shoes.

Google Calendar has that stupid icon on a feature for auto-selecting where you'll attend a meeting based on your office schedule. They're just as capable of bandying about their bullshit applications for AI as everyone else.



They even have a whole suite of useless AI add-ons for Google Workspace https://workspace.google.com/blog/product-announcements/duet...

(Well, the email add on is somewhat useful, while the generative AI for illustrations in Slides is outright laughable.)



I believe this is precisely why Apple avoids calling its own large language model features or other neural network-related things "AI". It's a very loaded term, very overhyped, carries a certain stigma, and it's not like that bubble didn't burst in the early 1990s or so.


Kind of ironic that this site bothered me with one of those full screen, must-click-away, overlays trying to get me to subscribe to their newsletter.


is it ironic? that's just how substack works.


I was doing groceries the other day and ended up looking at a new Coca-Cola flavor said to be generated by AI.

Looks to me like AI is just the new buzzword that replaced crypto and NFT and we will just see more of it for a few months until it calms down.



I saw that too, and it didn't really make sense to me in the sense that food is complicated.

It needs to have the right ingredients in the right amounts to not spoil too quickly, to taste decent, to pass health regulations, etc.

Coca Cola knew exactly what ingredients were available. So what role did the AI play besides being a glorified list.random()? That humans then used as a starting point to turn into an actual drink?



"Generated by dart thrown at wall" doesn't quite have the same ring to it though it is effectively the digital version of that.


> I will pay money for you to not use the word AI anywhere on your app. Make that the premium tier and I will buy it...

No you won't!



It's not exactly reassuring that Google is "using AI" to send scam emails to my inbox and push articles that keep repeating the keywords they are ostensibly about to the top of the SERP.


All valid points, but then you hit the bottom of the page and are asked to sign up for a (drumroll) writing course.


The only correction I'd make is that a lot of them ask you to create an account, or at least give them your email address, before you get to experience the magic.


Which is most likely how they are making most of their money - by selling email lists.


I am a bit more optimistic or just unfazed by all the AI hype. After seeing these "hype trains" before, it just seems like part of our collective process of figuring out what works. Sure, I think optimizing these cycles to burn less capital would be good but I guess I'm actually on the side of having a bunch of people throwing a bunch of crap against the wall and seeing what sticks. :)


We still don’t know what LLMs are for.

By this I don’t mean LLMs aren’t useful. We have known that there was novel behavior happening as early as GPT-2.

GPT-3 represented a clearly novel transformative technology. But we still didn’t know what to do with it.

The reason for this is that the people who come up with products are generally a different set of people from those who innovate novel ML models.

The latter tend to be PhDs or the extremely mathematically talented.

The former tend to be a mix of product-y software engineers & engineer-y product & strategy people.

There were some products, GitHub Copilot being the breakaway. Knowledge needs time to diffuse, and a market demand is necessary to catalyze that quickly.

Out of exasperation, OpenAI decided to take one of the most common prompting use cases on the GPT-3 beta playground, Q&A, and make a chat product, almost as a technology demonstrator.

Then ChatGPT exploded to hundreds of millions of users.

All hell broke loose. Every Fortune 500 promised a “generative AI” rollout. And of course, like the feverish corporate-branded “metaverse” stuff, it all sucks.

You can’t “corporate partnership” and “internally accelerate” a technological shift of this magnitude. You can’t hire BCG to do it for you. And you can’t tack a model onto your existing product and call it done.

LLMs and other new foundation models require fundamentally new products. They require new middleware to be deployed like vector databases, RAG frameworks, and agentic systems.

Startups are starting to crack these problems.

But my fear is that when the bottom drops out of the corporate efforts, the investment attitudes will shift just as there’s the most work to be done.



LLMs have been more useful on the encoder side than the decoder side in my experience. Creating embeddings is useful in all sorts of ways. Specifically, if your business involves any sort of recommendation system, embeddings are useful.

On the decoder side, the use cases are more subtle. Rarely do you want your product to be the raw output of a statistical language model. More often the output is an enabler of other things your business is doing. For example, you can use it to develop “doc queries”, which are queries that a doc might be surfaced under. This can help with cold start issues by supplementing existing doc info.
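
(A sketch of the "doc queries" idea described above, assuming the OpenAI Python client; the model name and prompt are placeholders, not the commenter's actual pipeline:)

```python
# "Doc queries" sketch: ask an LLM for search queries a document could be
# surfaced under, then index them alongside the document to ease cold start.
from openai import OpenAI

client = OpenAI()

def doc_queries(doc: str, n: int = 5) -> list[str]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        messages=[{
            "role": "user",
            "content": f"Write {n} short search queries this document answers, "
                       f"one per line:\n\n{doc}",
        }],
    )
    lines = response.choices[0].message.content.splitlines()
    return [q.strip() for q in lines if q.strip()]
```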



> We still don’t know what LLMs are for

I have found the LLM concept to be tremendously valuable in illuminating my understanding of the operation of human brains (both mine and others'). This mainly arose after reading Wolfram's article, then continued with independent thinking along the same lines. Honestly, by far the biggest coin-dropping moment for me in 40 years, since I first heard John Searle talk.

Note I don't actually use LLMs for anything.



Well, not every one. I made an Android locally-run AI ESRGAN super-resolution app (a year ago) to upscale your photos without uploading them "to the cloud", and it was a normal app with a normal UI.

There was a free and a paid version (the only difference was ads I was planning to run every couple of uses, but at that level of usage there was no point enabling them). I say "was" because Google has likely pulled it from the store for not being upgraded to Android 13 before the end of August.

Not that many people were interested (about ~5 paid and 30 new free users per month) for about a year, so I'm not spending lots of time on it (it was me testing whether people are interested in more private AI solutions; some are, but not the majority).

Instead we have apps like the ones described, and they have 1mln+ installs, so evidently people want them. If I had to make my living making mobile apps I'd probably make the exact same thing, seeing their success.



I think we need to re-think the fundamental reasons for why we use computing technology. There was a time where computers were mainly used to assist humans in handling complex, tedious tasks. Specifically, tasks rooted in computation and organization of data.

Today people expect computers to do everything for them, because they have been sold the idea that computers should do everything from drawing for them to fulfilling their social needs as humans.

I think it's time to step back and really think about what the role of computers should be and how we as humans use computers.

I remember when using the Internet and digital media were in their early stages. Compared to today there was a very small segment of the population doing those things. You were underground if you were playing video games on the Internet.

Most computers lived in offices doing what they did best: office work.

Maybe we've gone too far in the wrong direction.



It's a new hammer and everyone is trying to make up new nails to hit with it.

Heard a pm say "hey we don't have anything about AI in there, what should we do with it" while reviewing the plan for next quarter.

But hey, the good thing is that we'll have to figure out AI solutions for the problems that bad AI generates.



I suspect it's going to be like Blockchain for awhile; companies are going to start rebranding themselves as "AI-first" or "AI-powered" just like camera companies [1] and iced tea companies [2] were doing with Blockchain back in 2017.

I guess that's the price of progress; some entity invents/refines a cool piece of tech, people speculate about the future of that tech, and then they wonder why Company X isn't fully utilizing that tech.

[1] https://en.wikipedia.org/wiki/KodakCoin [2] https://en.wikipedia.org/wiki/Long_Blockchain_Corp



Some way into this whole cycle, I still find LLM-prompts-as-the-interface extremely annoying. I have to pause and carefully think out the entirety of a task I want to do; I can’t just rely on a button being disabled or a confirmation dialog to realize I’m using a tool wrong. And then it usually does something bizarre and it’s hard to trace why it did that.

LLMs are still insanely impressive for things where I would normally have a free-form conversation (summarization, expansion, painting an image), but in cases like Microsoft Copilot, using it to change a setting in Windows is actually more annoying than just asking where that setting is located.

LLMs are the shittest version of a toolbar to me



The author shared screenshots but didn't mention the color: purple, or whatever shade of purple that is. Why is every AI thing now purple? Look at Shortwave, the really nice email app; they also turned AI and as a result have turned purple now. Look at their homepage.

To me it looks very much like when everyone was integrating crypto into their tech in some form and they were all using these similar colors and gradients.

Now when I see a site like this it basically turns me off. I really liked Shortwave; it was supposed to be the next Google Inbox or better. Now it's AI.



I agree with the fact that there are a ton of low effort shitty companies riding the AI craze, but I think that cat looks pretty cool and it's still very impressive a computer could generate it.


"How we got here" can be summed up in one word: mimetics.

Originality is the rarest thing in this world. Most people operate from a position of fear and competition [1]. The sad truth is that, generally, everybody just does what everybody else does because everybody else is doing it.

[1] https://soundcloud.com/i-am-sovereign/solo-podcast-from-kapi...



Ah but what about which now has a Magic AI Sidebar(tm) which is just a GPT-3 prompt that maybe has been fed a list of your document titles to be slightly more relevant.


I guess we are at that part of the hype-cycle where the initial excitement is turning into doubt.

At this point, the vested interests (ai-vcs, startups, ai-researchers etc) must unleash a new wave of propaganda to keep the faith. eg: sama drops a tweet: "saw AGI yesterday in a dark alley", or musk says "optimus did a cartwheel - whoohoo", or lecun publishes "chain of thought is all you need" on arxiv.

methinks crypto had more legs than AI - at least the initial adopters made some quick cash out of it.



The problem also trickles down to hiring and recruitment. Everyone and their parents are doing GenAI these days, with multiple publications at top-tier venues. What does this even mean?

Was there a hype cycle before this period in some other niche that almost everyone was doing?

In my life I have seen only one about web apps, then another on crypto. What was the bubble before that? Did everyone who started their career go through the route of hype cycles because the recruiters would call for interviews? What are your opinions?



You're not necessarily wrong, but you may also be throwing stones from a glass (Substack) house. Barely made it past the fold before I was nagged with a Substack popup.


Will the real slim shady please stand up?

The current AI buzz is being ripped out by the wannabes - everything's marketing and buzz and being able to distinguish the genuine jumps forward is harder because we have made everything a wrapper around the same models - u have a diffusion and a transformer model - and u make it easy to call them - what do u get - a 100+ diffusion / transformer wrappers with a library of prompts - voila your new AI app / agent.



This is slightly tangential, but has anyone recently used ChatGPT for non-trivial programming tasks where you start typing a precise description of what you need from the system and by the time you finish writing it you realise there's a simple way to do it? It has happened to me several times and it was as if serialising the problem was the hard part, like that famous Einstein quote.


That's just classic rubber duck debugging, where explaining the problem forces you to organize your thoughts. It's a good practice!


50% of my Github Issues posts never end up being submitted because writing it out for the understanding of others helps me find the solution myself.


The most egregious one to me has been CircleCI's "Ask an AI about this error" option on failed builds. Always spits out something irrelevant at best or misleading at worst, and it does so in the typical ChatGPT high school essay voice, complete with superfluous intro and conclusion paragraphs.

"Atlassian Intelligence" may be just as bad, though...



I wonder how differentiated 'AI wrappers' can really be, given the power of GPT4? There are definitely some use cases I've thought of w/ finetuning for very niche programming languages (Source Hammer from Counterstrike, for example), but in my everyday use I haven't found anything that beats GPT4 + proper prompting.


I think this is better than GPT4 for storywriting/rp, especially with a custom grammar restriction and custom sampling parameters:

https://huggingface.co/Sao10K/Euryale-1.3-L2-70B-GGUF

Mistral 7B finetunes are pretty amazing as well.

Not sure about the code generation side of things. Llama was actually really bad at this relative to other tasks, and I hear even 34B is pretty mediocre, but there are some more obscure foundational programming models I have not tried.



> All of Google is AI. But you don’t see them bandying it around like a kid with new light up shoes.

From a few months ago: https://www.youtube.com/watch?v=-P-ein58laA



I write and sell software that uses a genetic algorithm to optimize seating layouts for weddings and events, but I have so far resisted using the dreaded "AI" anywhere on my site.

At this point anything with an 'if' statement in it is "AI" as far as marketing people are concerned.
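
(A toy version of that approach, and decidedly not the commenter's actual product: a genetic algorithm that permutes guests across tables and scores layouts against a conflict list.)

```python
# Toy genetic algorithm for seating - illustrative only.
import random

GUESTS = list(range(12))               # guest ids
TABLES, SEATS = 3, 4                   # three tables of four
CONFLICTS = {(0, 1), (2, 7), (5, 11)}  # pairs who should not share a table

def fitness(layout):
    # Count conflicting pairs seated together; lower is better.
    bad = 0
    for t in range(TABLES):
        table = set(layout[t * SEATS:(t + 1) * SEATS])
        bad += sum(1 for a, b in CONFLICTS if a in table and b in table)
    return bad

def mutate(layout):
    # Swap two random guests - a permutation-safe mutation.
    child = layout[:]
    i, j = random.sample(range(len(child)), 2)
    child[i], child[j] = child[j], child[i]
    return child

population = [random.sample(GUESTS, len(GUESTS)) for _ in range(50)]
for _ in range(200):
    population.sort(key=fitness)
    survivors = population[:25]  # keep the best half
    population = survivors + [mutate(random.choice(survivors)) for _ in range(25)]

best = min(population, key=fitness)
print("conflicts:", fitness(best), "layout:", best)
```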



Reminds me of the old Bootstrap days


The cycle repeats ad infinitum. Flip phones, where some slid up, others folded out. Some had iTunes but were dogshit otherwise. Then you had templating languages. Twig, Smarty, blade, etc. Then you had social media, where everyone wanted a piece of the Facebook pie. Then Crypto. Then NFTs. Now it's LLM. Tomorrow it will be something different.


Building an LLM wrapper is quick money right now, thus the abundance of pointless businesses, and AI fatigue.

I optimistically think that when the initial interest wanes, people will find creative ways to leverage generative AI in a less obvious but more practical fashion.



> All of Google is AI. But you don’t see them bandying it around like a kid with new light up shoes.

https://www.youtube.com/watch?v=YivUOqd91Nk



I think the key sentence is this:

> But you don’t see them bandying it around like a kid with new light up shoes.

Yes, because Google and other big players are not after VC money, while all the companies that put it in their marketing copy are.



No; that's totally untrue. Large, public firms which (by definition) have no VC money are doing it too. I mean ... Microsoft?


I call it the Canva-fication of AI.

I was at the Hubspot CRM conference a month ago and every single vendor seemed to be using that accursed emoji in their marketing materials and websites.



I totally disagree about Canva; some of their "magic" tools are really useful. I don't care what they call it.


This seems like an AI-written article bashing AI, complete with a numbered list where the AI forgot how to count. Oh, and it's literally just SEO article spam selling a "workshop".

>I’m teaching a workshop soon! An Invitation

200 points at this time, next AI article by the author: How to game HN and get your article spam on the front page! (SIGN UP FOR MY NEWSLETTER TO CONTINUE READING)



Machine learning is done in Python; AI is done in PowerPoint slides and marketing pages.


Could be worse. Remember the crypto / blockchain hype of 2 weeks ago?


You forgot the "Metaverse"


I wonder how long until Microsoft will add "AI" to Calculator.


Shit writing, shit take, [current thing], #1 on HN, how the mighty have fallen.


I make (sigh) AI for a living, and arguably have been since before we started calling it AI.

Based on my own first-hand experience, if the first thing a company has to say about a product or feature is that it's powered by AI, that is a strong signal that it isn't actually very useful. If they had found it to be useful for reliably solving one or more real, clearly-identified problems, they would start by talking about that, because that sends a stronger signal to a higher-quality pool of potential customers.

The thing is, companies who have that kind of product are relatively rare, because getting to that point takes work. Lots of it. And it's often quite grueling work. The kind of work that's fundamentally unattractive to the swarms of ambitious, entrepreneurial-minded people looking to get rich starting their own business who drive most attempts at launching new products.



The hype gets pushed down from the C-suite because prospects are always asking "are you doing anything with ${latest}?" and the salesdroid has to answer "of course! we'll be showing a teaser in the next couple of months".

Then it gets pushed up from the bottom by engineers practicing Resume-Driven Development. Everybody perks up when a project using ${latest} gets mentioned in the CTO's office. Wouldn't it look cool to say I was a pioneer in ${latest}?

When it's being pushed from the top and the bottom, it's gonna happen.

Left out of the process is thoughtful/imaginative product design and innovation. Sometimes it happens but it's more of an accident in most cases.



I worked at Google for 9 years, and even up at the director level there was no way to avoid this.

You either contradicted it and got defunded, slowed it down to apply it appropriately and got removed from the project for not appearing ambitious enough, or you went full speed on prematurely scaling an application of it and inevitably failed at scale.

I did founding work on the Google Assistant and was caught in this exact conundrum. There was no solution.



When otherwise smart people do seemingly dumb things, you have to ask if there is some rational explanation for the behavior. My take, having experienced everything you describe here, is that from upper management's position, each of these new shiny objects is a bet. Maybe it'll work out (networking, internet, cloud) or it'll go bust (push, blockchain, etc).

If it works out, the company gets to ride a new tech wave while avoiding obsolescence (see Yahoo, DEC, Sun, etc). If it doesn't pan out, the company writes off the investment and moves on to the next shiny thing.

From the leadership perspective, it actually makes sense to jump on the latest shiny thing. From the mid-level manager's perspective, it sucks to be the one who has to go make sense of it.



I spent time at Google and found this to be the case as well. I think the only "cure" for this is good upper management that isn't swayed by flavor-of-the-moment hype, and also a culture of being monomaniacally product-focused.

Places that are intensely product-focused aren't immune from frothy hype, but at least it's forced through the critical filter of "ok but what does this do for our product really", which is a vital part of separating the wheat from the chaff when it comes to new ideas.

My main beef with Google is that the company's culture is intensely not product-focused. The company's defined by its origin story of a groundbreaking technology that happened upon product-market fit, and it's pathologically unable to do the opposite: start with a clear-eyed product vision and work backwards to the constituent technologies.



Maybe a minor tangent, but I really enjoyed playing with Google Assistant when it first came out. Great novelty, especially asking it for jokes.


All aboard the Zeitgeist Express, next stop Adrenalineville!


> if the first thing a company has to say about a product or feature is that it's powered by AI, that is a strong signal that it isn't actually very useful.

Great takeaway, and we know this is true because of other examples from the past. Remember when every product had to be made out of Blockchain, and startups led their marketing copy with "Blockchain-powered"? We're doing the same thing with AI.

Generative AI is a developer tool, not a product. Like the programming language you used, the fact that you are using this tool should not be relevant to users. If you have to mention AI to explain what your product does, you're probably doing it wrong. Some of these "AI startups" pitches sound ridiculous. "We use AI to... [X]" is like saying "We use Python to... [X]". Who cares? You're focusing on a detail of what the solution is before we've even agreed I have a problem.



Corollary: If a product marketed as AI is useful, that's a strong signal it's a logistic regression.




Even when I have a model that isn't logistic regression, there is always a logistic regression stage at the end for probability calibration.

I mean, what good is a prediction that is 50% accurate? If you are classifying documents for a recommendation model, an "up/down" classification is barely useful, while a probability-calibrated classification is golden. With no calibration you have an arXiv paper; with calibration you can build the classifier into a larger system that takes actions under uncertainty.
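
For what it's worth, that calibration stage is only a few lines in scikit-learn. Here is a minimal sketch of the pattern, with synthetic data and an arbitrary base model standing in for a real pipeline (the prefit-calibration API shown here may differ across sklearn versions):

    from sklearn.calibration import CalibratedClassifierCV
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_fit, X_cal, y_fit, y_cal = train_test_split(X, y, test_size=0.25, random_state=0)

    # Any uncalibrated scorer works here; gradient boosting stands in for
    # "a model that isn't logistic regression".
    base = GradientBoostingClassifier().fit(X_fit, y_fit)

    # Platt scaling: fit a sigmoid (logistic regression) over the base model's
    # scores on held-out data, so predict_proba returns usable probabilities.
    calibrated = CalibratedClassifierCV(base, method="sigmoid", cv="prefit")
    calibrated.fit(X_cal, y_cal)

    probs = calibrated.predict_proba(X_cal)[:, 1]
    # A downstream system can now act only above a confidence threshold,
    # e.g. recommend only when probs > 0.9.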

The generative paradigm holds progress back. You can ask ChatGPT to do anything and it will do it with 70-90% accuracy in all but the hardest cases. Screwing around with prompts can get you closer to the high end of that range, but if you want to do better than that, you've got to define your problem well and go through a lot of the grindy work that you had to do with symbolic A.I. and have always had to do with machine learning. (At the very least, you're going to need a large evaluation set to know how well your prompt-based solution works, and to know that it didn't get broken by a software update.)
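
A minimal sketch of that evaluation loop follows; call_llm is a placeholder for whatever model API is in use, and the JSONL file is assumed to hold frozen, labeled cases:

    import json

    def evaluate(call_llm, eval_path="eval_set.jsonl"):
        """Score a prompt-based solution against a frozen, labeled test set.
        Re-run after every prompt tweak or model update to catch regressions."""
        correct = total = 0
        with open(eval_path) as f:
            for line in f:
                case = json.loads(line)  # e.g. {"input": "...", "expected": "..."}
                correct += call_llm(case["input"]).strip() == case["expected"]
                total += 1
        return correct / total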

The image that comes to my mind, almost intrusively, is Mickey Mouse from the movie Fantasia, where he exhibits various sins, laziness most of all:

https://www.youtube.com/watch?v=VErKCq1IGIU

So many of these efforts show off terrible quality control. There is a site that has posted about 250 galleries (at a rate of 2 a day) of about 70 pornographic images apiece, generated by A.I. At best the model generates highly detailed images, including the stitching on the seams of clothes, clothing with floral prints matching cherry blossom trees in the background, and sometimes crowds of people that really click thematically. Then you notice the girls with two belly buttons, and if you look enough you'll see some with 7 belly buttons, and you realize the model doesn't really understand the difference between body parts and skin, so there is a nipple that looks like part of the bra rather than showing through the bra, etc.

Then there are the hideously distorted penises that are too long, too short, disembodied, duplicated, bifurcated, pointing in the wrong direction and would otherwise be nightmare fuel for anyone with castration anxiety.

If the wizard were in charge, he'd be cleaning these up; looking at 150 images a day and culling the worst is less than an hour of work. But no, Mickey Mouse is in charge.

"Chat" in "ChatGPT" is a good indication of what is going on because it is brilliant at chat where it can lean on a conversation partner to provide meaning and guidance and where the ability to apologize for mistakes really seduces people, even if it doesn't change its wrong behavior. The trouble is trying to get it to perform "off the leash" at a task that matters is a matter of pushing a bubble around under a rug, that "chasing an asymptote" situation is itself seductive and one of the worst problems in technology development that entraps the most sophisticated teams, but put it together with unsophisticated people who don't think systematically and a system which already has superhuman powers of seduction (e.g. "chat" as opposed to problem solving) and you are cruising for a bruising.



*linear logistic regression

I mean, a typical LLM is also logistic regression, but it's not linear.
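
To make the quip precise: the output layer of a typical LLM is multinomial logistic regression over learned features. With hidden state h(x) produced by the (highly nonlinear) transformer stack, the next-token distribution is

    P(y = i \mid x) = \frac{\exp(w_i^\top h(x))}{\sum_j \exp(w_j^\top h(x))}

i.e. a softmax (logistic) regression; all the nonlinearity lives in h.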



>> If they had found it to be useful for reliably solving one or more real, clearly-identified problems, they would start by talking about that, because that sends a stronger signal to a higher-quality pool of potential customers.

THIS

The most basic thing about marketing is the table of

| Features | Functions | Benefits |

If $THING_01 is actually useful, any competent marketer or salesperson will talk about the BENEFITS, right up front

(And the good ones will also provide info on the functions and features for curious or extra-diligent customers, while keeping that info readily accessible without letting it obscure the benefits.)

The main thing about marketing & selling is touting BENEFITS TO THE CUSTOMER.

"$THING_01 will make you sexier!!"

Not how it makes you sexier.

If they are talking about the features of $THING_01 without also talking about the functions and benefits, they either have no benefits (and maybe even no function), or they don't even understand their own product.

Either way, do you really want to spend time and/or money on that company?



Double sigh. I am even guilty of building this, because, well, investors need to see AI in our product description. So what do we do? Slap an AI button everywhere and call OpenAI. Never mind that you could do the same thing by, say, calling an existing Python library!


A direct, objective comparison with a well-crafted set of heuristics is the kryptonite of many a deep learning model.


"if the first thing a company has to say about a product or feature is that it's powered by AI, that is a strong signal that it isn't actually very useful."

I have a product team that is totally disconnected from the engineering team. Yeah, we use neural networks. They don't understand or know about neural networks, so they just call everything "AI", and it's very cringe. But that doesn't mean we don't have good products.



More importantly, you would want to keep the secret sauce secret. If I develop something actually near-magical, I'm not going to blast the underlying technology from the rooftops; that's my entire edge.


I've also been around a while and I agree completely, but I think articles such as this amount to more uncritical skepticism than substance.

The cat fairy was a cherry-picked example.

Back in the early days of the internet I dealt a lot in retail. Adults would come in and say things like "What would I ever do with the internet?". Today feels a lot like those early days. Make of it what you will.



Back in the early days of the internet the market was flooded with ill-conceived Web-powered products. We don't remember any of them because they didn't last, but in the late 90s they were EVERYWHERE.

Similarly... it's not that I don't think machine learning is useful. I wouldn't have built my career on it if I didn't. But it is no more immune to Sturgeon's Law than the Information Superhighway was.



> Adults would come in and say things like "What would I ever do with the internet?". Today feels a lot like those early days. Make of it what you will.

That makes perfect sense, as online shopping wasn't what sold consumers on the Internet. It was email.

Online shopping came years later and early Amazon wasn't much more than an electronic mail-order catalog.

You wouldn't be able to sell the Internet to somebody on the promise that 'a lot of cool stuff is coming down the pipeline soon'. Same thing with consumer AI currently: lots of potential, no killer app.



Also importantly, there are no points for predicting a broad field, only points for being correct on specific things.

There were a zillion "virtual mall" products in the early days of the internet. Exactly zero of them convinced anyone to buy stuff online. Amazon ended up cracking the formula and made billions doing it.

The investors in the virtual malls lost their money. And people on the sidelines who predicted that we would all shop online are IMO only correct in the most meaningless and facile sense, because they had no specific predictions of what would work and what wouldn't, just a vague gesture at a futuristic buzzword.

It's easy to wave generally in the direction of an abstract concept and say "that's a big deal", literally anyone can do it (and did! with crypto!), but it's specific predictions and hypotheses that separate those who know WTF they're talking about from LinkedIn thought-leadership pablum.

Likewise "AI is a big deal" in and of itself is not an astute statement or meaningful prediction. What about it? What specifically can be leveraged from this technology that would appeal to users? What specific problems can you solve? What properties of this technology are most useful and what roadblocks remain in the way of mass success?

pg coined the "middlebrow dismissal"; I'd like to suggest a corollary: the middlebrow hype. All hot air, without enough specificity to be worth anything.

"The information superhighway will be huge!" is the 90s equivalent of "AI is the future". Ok. How?



>> The cat fairy was a cherry-picked example.

Every example of AI is a cherry-picked example.



> if the first thing a company has to say about a product or feature is that it's powered by AI

They actually mean: we couldn't get it to work, so we added a black-box method to make it work, sometimes. And the examples on our website are all cherry-picked.



> and arguably have been since before we started calling it AI.

What was it like working in tech in the 1950s?



> If they had found it to be useful for reliably solving one or more real, clearly-identified problems, they would start by talking about that, because that sends a stronger signal

Respectfully, I don’t think we’re there yet. You and I are tired of the over-abused AI label, but for the wide public, as of today, it’s still a stronger selling point. A solution to a specific problem can only be sold to people struggling with that particular problem; a product with a flashy page and AI capabilities can be sold to a wide tail of not-overly-tech-savvy enthusiasts. That makes for good bang for the buck, even if only in the short term.



> but for the wide public, as of today, it’s still a stronger selling point.

Is it really? People care that their phone takes great pictures each and every time; I don't think you need to add that the way you do this is by applying various machine-learning algorithms.

Where A.I. falls down for me is in the failure cases: simply telling me that more training is required or that the training set was incomplete isn't good enough. You need to be able to tell me exactly why the computer made the mistake, and current A.I. products can't do that. That should be a strong indicator to shy away from A.I.-powered products in many industries.



> There are apps who use AI well. They don’t call it AI because they are not children. All of Google is AI. But you don’t see them bandying it around like a kid with new light up shoes.

Except of course you do, and well before the recent generative AI hype too (which they've also leaned heavily into on the marketing side fwiw).






