(comments)

Original link: https://news.ycombinator.com/item?id=40345775

A Google search for practically any long-tail keywords reveals the significant impact that language models (LLMs) and generative AI models have already had on platforms such as DuckDuckGo. AI-generated content has driven an increase in fraud on social media. Because AI can easily create convincing text and images to deceive people, trust and social cohesion are the main concerns. It is now necessary to advise elders to ignore suspicious late-night messages. Despite potential labour-market and existential risks, the immediate threats to trust and social harmony are the pressing issue. Many people are becoming distrustful and isolated, which could lead to serious social consequences. At present, means of authentication are limited and there is no regulatory consensus, leaving individuals confused and feeling powerless. Low-hanging fruit in authentication and digital-signature solutions is available, but it needs to be implemented. Contrary to exaggerated claims, LLMs have not displaced jobs on a large scale. Overall, the challenges posed by LLMs and generative AI deserve careful consideration and action.

Related articles

Original article


This is a very cool demo - if you dig deeper there’s a clip of them having a “blind” AI talk to another AI with live camera input to ask it to explain what it’s seeing. Then they, together, sing a song about what they’re looking at, alternating each line, and rhyming with one another. Given all of the isolated capabilities of AI, this isn’t particularly surprising, but seeing it all work together in real time is pretty incredible.

But it’s not scary. It’s… marvelous, cringey, uncomfortable, awe-inspiring. What’s scary is not what AI can currently do, but what we expect from it. Can it do math yet? Can it play chess? Can it write entire apps from scratch? Can it just do my entire job for me?

We’re moving toward a world where every job will be modeled, and you’ll either be an AI owner, a model architect, an agent/hardware engineer, a technician, or just... training data.



> We’re moving toward a world where every job will be modeled

After an OpenAI launch, I think it's important to take one's feelings about the future impact of the technology with a HUGE grain of salt. OpenAI are masters of hype. They have been generating hype for years now, yet the real-world impacts remain modest so far.

Do you remember when they teased GPT-2 as "too dangerous" for public access? I do. Yet we now have Llama 3 in the wild, which even at the smaller 8B size is about as powerful as the [edit: 6/13/23] GPT-4 release.

As someone pointed out elsewhere in the comments, a logistic curve looks exponential in the beginning, before it approaches saturation. Yet, logistic curves are more common, especially in ML. I think it's interesting that GPT-4o doesn't show much of an improvement in "reasoning" strength.



A Google search for practically any long-tail keywords will reveal that LLMs have already had a very significant impact. DuckDuckGo has suffered even more. Social media is absolutely lousy with AI-powered fraud of varying degrees of sophistication.

It's glib to dismiss safety concerns because we haven't all turned into paperclips yet. LLMs and image gen models are having real effects now.

We're already at a point where AI can generate text and images that will fool a lot of people a lot of the time. For every college-educated young person smugly pointing out that they aren't fooled by an image with six-fingered hands, there are far more people who had marginal media literacy to begin with and are now almost defenceless against a tidal wave of hyper-scaleable deception.

We're already at a point where we're counselling elders to ignore late-night messages from people claiming to be a relative in need of an urgent wire transfer. What defences do we have when an LLM will be able to have a completely fluent, natural-sounding conversation in someone else's voice? I'm not confident that I'd be able to distinguish GPT-4o from a human speaker in the best of circumstances and I'm almost certain that I could be fooled if I'm hurried, distracted, sleep deprived or otherwise impaired.

Regardless of any future impacts on the labour market or any hypothesised X-risks, I think we should be very worried about the immediate risks to trust and social cohesion. An awful lot of people are turning into paranoid weirdos at the moment and I don't particularly blame them, but I can see things getting seriously ugly if we can't abate that trend.



> I'm not confident that I'd be able to distinguish GPT-4o from a human speaker in the best of circumstances and I'm almost certain that I could be fooled if I'm hurried, distracted, sleep deprived or otherwise impaired.

Set a memorable verification phrase with your friends and loved ones. That way, if you call them out of the blue or from some strange number (and they actually pick up for some reason) and tell them you need $300 to get you out of trouble, they can ask you to say the phrase, and they'll know it's you if you respond appropriately.

I've already done that and I'm far less worried about AI fooling me or my family in a scam than I am about corporations and governments using it without caring about the impact of the inevitable mistakes and hallucinations. AI is already being used by judges to decide how long people should go to jail. Parole boards are using it to decide who to keep locked up. Governments are using it to decide which people/buildings to bomb. Insurance companies are using it to deny critical health coverage to people. Police are using it to decide who to target and even to write their reports for them.

More and more people are going to get badly screwed over, lose their freedom, or lose their lives because of AI. It'll save time/money for people with more money and power than you or I will ever have though, so there's no fighting it.



Or just ask them to tell you something only you both know (a story from childhood, etc). Reminds me of a book where this sort of thing was common (don't remember the title):

1. something you have

2. something you know

3. something you are

These three factors are required for any authentication (authn).
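A minimal sketch of how those three factors might be checked together; every function name, stored value and threshold here is an illustrative assumption, not any particular library or product:

    import hmac
    import hashlib

    def verify_knowledge(supplied_phrase: str, stored_hash: bytes) -> bool:
        # "Something you know": constant-time comparison of a hashed shared phrase.
        digest = hashlib.sha256(supplied_phrase.encode()).digest()
        return hmac.compare_digest(digest, stored_hash)

    def verify_possession(supplied_code: str, expected_code: str) -> bool:
        # "Something you have": e.g. a one-time code from a token or phone.
        return hmac.compare_digest(supplied_code, expected_code)

    def verify_inherence(similarity_score: float, threshold: float = 0.9) -> bool:
        # "Something you are": a biometric match score produced by some other system.
        return similarity_score >= threshold

    def authenticate(phrase, stored_hash, code, expected_code, bio_score) -> bool:
        # Require all three factors to pass.
        return (verify_knowledge(phrase, stored_hash)
                and verify_possession(code, expected_code)
                and verify_inherence(bio_score))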



For many people it would be better to choose specific personal secrets due to the amount of info online. I'm not a very active social media user, and what little I post tends not to be about me, but from reading 15-year-old Facebook posts made by friends of mine you could definitely find at least one example of each of those categories. Hell, I think probably even from old work-related LinkedIn posts.


Lincoln already made that observation in the 1850s: "You can fool some of the people all of the time, and all of the people some of the time."

As technology advances those proportions will be boosted. Seems inevitable.



The way to get around your side-channel verification phrase is by introducing an element of stress and urgency: "omg, help, I'm being robbed and they need $300 immediately or they'll hurt me, no time for a passphrase!" The caller can additionally feign memory loss.

Alternatively, while it may be difficult to trick you directly, phishing the passphrase from a more naive loved one or bored coworker and then parroting it back to you is also a possibility, etc.

Phone scams are no joke and this is getting past the point where regular people can be expected to easily filter them out.



I think humankind has managed massive shifts in what and who you could trust several times before.

We went from living in villages where everyone knew each other to living in big cities where almost everyone is a stranger.

We went from photos being relatively reliable evidence to digital photography where anyone can fake almost anything and even the line between faking and improving is blurred.

We went from mass distribution of media being a massive capital expenditure that only big publishers could afford to something that is free and anonymous for everyone.

We went from a tiny number of people in close proximity being able to initiate a conversation with us to being reachable for everyone who could dial a phone number or send an email message.

Each of these transitions caused big problems. None of these problems have ever been completely solved. But each time we found mitigations that limit the impact of any misuse.

I see the current AI wave as yet another step away from trusting superficial appearances to a world that requires more formal authentication protocols.

Passports were introduced long ago but never properly transitioned into the digital world. Using some unsigned PDF allegedly representing a utility bill as proof of address seems questionable as well. And the way in which social security numbers are used for authentication in the US is nothing short of bizarre.

So I think there is some very low-hanging fruit in terms of authentication and digital signatures. We have all the tools to deal with the trust issues caused by generative AI. We just have to use them.
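As a rough illustration of that low-hanging fruit, here is a minimal sketch of signing and verifying a statement with an Ed25519 key pair via the `cryptography` package; a real identity scheme would also need an issuer (passport office, utility, bank) to vouch for the public key, which this sketch does not model:

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # The issuer signs the statement once.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    statement = b"Jane Doe resides at 1 Example Street, as of 2024-05-14"
    signature = private_key.sign(statement)

    # Anyone holding the issuer's public key can later check the statement.
    try:
        public_key.verify(signature, statement)
        print("signature valid")
    except InvalidSignature:
        print("signature invalid or statement tampered with")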



I tend to think the answer is to go back to villages, albeit digital ones. Authentication only enforces that an account is accessed by the correct "user", but particularly in social media many users are bad actors of various stripes. The strongest account authentication in the world doesn't help with that.

So the question, I think, is how do we reclaim trust in a world where every kind of content can be convincingly faked? And I think the answer is by rebuilding trust between users such that we actually have reason to simply trust the users we're interacting with aren't lying to us (and that also goes for building trust in the platforms we use). In my mind, that means a shift to small federated and P2P communication since both of these enable both the users and the operators to build the network around existing real-world relationships. A federation network can still grow large, but it can do so through those relationships rather than giving institutional bad actors as easy of an entrance as anyone else.



But this causes other problems such as the emergence of insular cultural or social cliques imposing implicit preconditions for participation.

Isn't it rather brilliant that you can just ask questions of competent people in some subreddit without first becoming part of that particular social circle?

It could also reintroduce geographical exclusion based on the rather arbitrary birth lottery.



No doubt, people die of absolutely everything ever invented and also of not having invented some things.

The best we can ever hope to do is find mitigations as and when problems arise.



Outside of the transition to large cities, virtually everything you've mentioned happened in the last half-century. Even the phone was expensive, and not in wide use until less than 100 years ago.

That's massive fast change, and we haven't culturally caught up to any of it yet.



Here's another one: We went from in-person story telling to wide distribution of printed materials, sometimes from pseudonymous authors.

This happened from the 15th century onward. By the 19th century more than half of the UK population could read and write.



If you get off the internet you'd not even realise these tools exist, though. And for the statement that all jobs will be modelled to be true, it'd have to be impacting the real world.


Is it even possible to "get off the internet" without also leaving civilisation in general at this point?

> it'd have to be impacting the real world

By writing business plans? Getting lawyers punished because they didn't realise that "passes bar exam" isn't the same as "can be relied on for citations"? By defrauding people with synthesised conversations using stolen voices? By automating and personalising propaganda?

Or does it only count when it's guiding a robot that's not merely a tech demo?



I’ll be worried about jobs being removed entirely by LLMs when I see something outside of the tech bubble genuinely having been removed by one - have there been any real cases of this? It seems like hyperbole. Most people in the world don’t even know this exists. Comparing it to the internet is insane, based off of its status as a highly advanced auto complete.


Sure, but think about all of the jobs that won't exist because this studio isn't being expanded, well beyond just whatever shows stop being produced. Construction, manufacturing, etc.

Edit: Also, this doesn't mean less media, just fewer actual humans getting paid to make media or work adjacent jobs



Not like there's nothing else to construct.

Maybe it's time to construct some (high[er] density) housing where people want to live? No? Okay, then maybe next decade ... but then let's construct transport for them so they can get to work, how about some new subway lines? Ah, okay, not that either.

Then I guess the only thing remains to construct is all the factories that will be built as companies decouple from China.



> have there been any real cases of this?

Apparently so: https://www.businessinsider.com/jobs-lost-in-may-because-of-...

Note that this article is about a year old now.

> Comparing it to the internet is insane, based off of its status as a highly advanced auto complete.

(1) I was quoting you.

(2) Don't you get some cognitive dissonance dismissing it in those terms, at this point?

"Fancy auto complete" was valid for half the models before InstructGPT, as that's all the early models were even trying to be… but now? The phrase doesn't fit so well when it's multimodal and can describe what it's seeing or hearing and create new images and respond with speech, all as a single unified model, any more than dismissing a bee brain as "just chemistry" or a human as "just an animal".



Sure and there’s endless AI generated blog spam from “journalists” saying LLMs are amazing and they’re going to replace our jobs etc… but get away from the tech bubble and you’ll see we’re so far away from that. Full self driving when? Autonomous house keepers when? Even self checkout still has to have human help most of the time and didn’t reduce jobs much. Call me a skeptic but HN is way too optimistic about this stuff.

Replacing all jobs except LLM developers? I’ll tell my hairdresser



Capabilities aren't the problem, cultural adoption is. Just yesterday I talked to someone who still googles solutions to their Excel table woes. Didn't they know of Copilot?

Maybe they didn't know, maybe none of their colleagues used it, their company didn't pay for it, or maybe all they need is an Excel update.

But I am confident that using Copilot would be faster than clicking through the sludge that is the Microsoft Office help pages (third party or not).

So I think it is correct to fear capabilities, even if the real-world impact is still missing. When you invent an airplane, there won't be an airstrip to land on yet. Is it useless? Won't it change anything?



HN comments, too. Long, grammatically perfect comments that sound hollow and a bit lengthy are everywhere now.

It's still early, and I don't see much in corporate communications, for instance, but it will be quite the change.



>Long, grammatically perfect comments that sound hollow and a bit lengthy

It's worse than I thought. They've already managed to mimic the median HN user perfectly!



Yes. The old heuristics of if something is generated by grammar and sentence structure don't work as well anymore. The thing that fucks me up the most about it is that I now constantly have to be uncertain about whether something is human or not. Of course, you've always had to be careful about misinformation on the internet, but this raises the scalability of false, hollow, and harmful output to new levels. Especially if it's a topic I'm trying to learn about by reading random articles (or comments), there isn't much of a frame of reference to what's good info and what's hallucinated garbage.

I fear that at some point the anonymity that made the internet great in the first place will be destroyed by this.



To be fair, that was already the case for me before AI, right around the time that companies, individuals and governments found out that they could write covert ads in the form of comments, posts and 'organic' content, and started to flood Reddit, Discord, etc.

The dead internet theory started to look more real with time, AI spam is just scaling it up.



We’ve reached a stage where it would be advisable not to release recent photos of yourself, nor any video or sound clips, to the public, unless you want an AI-faked instaperson of yourself starting to reach out to members of your externally visible social network, asking for money, emergency help, etc.

I guess we need to have an AI secretary to take in all phonecalls from now on (spam folder will become a lot more interesting with celebrity phone calls, your dead relative phoning you etc)



Hopefully, we will soon enter the stage where nobody believes anything they see anymore. Then, you no longer have to be afraid of being misinterpreted, because nobody is listening anymore anyway. Great time to be alive!


Luckily there’s a “solution” to that: Just don’t use the internet for dialogue anymore.

As someone that grew up with late-90’s internet culture and has seen all the pros and cons and changes over the decades, I find myself using the internet less and less for dialogue with people. And I’m spending more time in nature and saying hi to strangers in reality.

I’m still worried about the impact this will have on a lot of people’s ability to reason, however. “Just” TikTok and apps like it have already had devastating results on certain demographics.



That's why I put it in quotation marks because it is a solution that will remain available, simply because the planet is really big and there'll always be places on the fringes. But it doesn't really solve the problem for society at large, it only solves it for an individual. But sometimes individuals showing other ways of living helps the rest of society see that there's choices where they previously thought there were none.


I don't know why anyone thinks this will happen. You can obviously write anything you want (we have an entire realm of works in this area that everyone knows about: fiction), and yet huge numbers of people believe passed-around stories from bad or faked media sources, or entirely unsourced ones.


I'm not saying either you or the parent commenter is right or wrong, but fiction in books and movies is clearly fiction and we consume it as such. You are right that some people have been making up fake stories and others (the more naive) have been quick to believe those false stories. The difference now is that it's not just text invented and written by a human, which takes time and dedication. Now it's done in a second. On top of that, it's easy to enhance the text with realistic photos, audio and video. It becomes much more convincing. And this material is created in a few seconds or minutes.

It's hard to know what to believe if you get a phone call with the voice of your child or colleague, and your "child"/"colleague" replies within milliseconds in a convincing way.



I agree it's fundamentally different in application, which I think will have a large impact (just like targeted advertising with optimisation vs billboards), but my point is that given people know you can just write anything and yet misinformation abounds, I don't see how knowing that you can fake any picture or video or sound leads to a situation where everyone just stops believing them.

I think unfortunately it will massively lower the trust of actual real videos and images, because someone can dismiss them with little thought.



What you see will be custom tailored to what you believe, and your loyalty will be won. Do what the AI says and your life will be better. It already knows you better than you know yourself. Maybe you're one of those holdouts who put off a smartphone until life became untenable without it. Life will be even more untenable without your AI personal assistant/friend/broker/coach/therapist/teacher/girlfriend to navigate your life for you.


Be glib, but that is one way for society to bring privacy back, and with it shared respect. I think of it as the “oh everyone has an anus” moment. We all know everyone has one and it doesn't need to be dragged out in polite company.


I'm not sure if people work like that — many of us have, as far as I can tell for millennia and despite sometimes quite severe punishments for doing so, been massive gossips.


I think for most people it's far too late, as there exists at least something on the internet and that something is sufficient - photos can be aged virtually and a single photo is enough, voice doesn't change much and you need only a tiny sample, etc.

And that's the case even if you've never ever posted anything on your social media - it could be family&friends, or employer, or if you've ever been in a public-facing job position that has ever done any community outreach, or ever done a public performance with your music or another hobby, or if you've ever walked past a news crew asking questions to bystanders of some event, or if you've ever participated in some contests or competitions or sports leagues, etc, all of that is generally findable in various archives.



> photos can be aged virtually and a single photo is enough

I'm sure AI-based ageing can do a good enough job to convince many people that a fake image of someone they haven't seen for years is an older version of the person they remember; but how often would it succeed in ageing an old photo in such a way that it looks like a person I have seen recently and therefore have knowledge rather than guesses about exactly what the years have changed about them?

(Not a rhetorical question to disagree with you, I genuinely have no idea if ageing is predictable enough for a high % result or if it would only fool people with poor visual memory and/or who haven't seen the person in over a decade.)

I feel like even ignoring the big unknowns (at what age, if any, will a person start going bald, or choose to grow a beard or to dye their hair, or get a scar on their face, etc.) there must be a lot of more subtle but still important aspects from skin tone to makeup style to hair to...

I've looked up photos of some school classmates that I haven't seen since we were teens (a couple of decades ago), and while nearly all of them I think "ah yes I can still recognise them", I don't feel I would have accurately guessed how they would look now from my memories of how they used to look. Even looking at old photos of family members I see regularly still to this day, even for example comparing old photos of me and old photos of my siblings, it's surprising how hard it would be for a human to predict the exact course of ageing - and my instinct is that this is more down to randomness that can't be predicted than down to precise logic that an AI could learn to predict rather than guess at. But I could be wrong.



> I guess we need to have an AI secretary to take in all phonecalls

Why not an AI assistant in the browser to fend off all the adversarial manipulation and spam AIs on the web? Going online without your AI assistant would be like venturing out without a mask during COVID.

I foresee a cat-and-mouse game, AIs for manipulation vs AIs for protection one upping each other. It will be like immune system vs viruses.



I'm paranoid enough that I now modulate my voice and speak differently when answering an unknown phone call just in case they are recording and building a model to call back a loved one later. If they do get a call, they will be like, "why are you talking like that?"


> unknown phone calls

This is my biggest gripe against the telecom industry. Calls pretending to be from someone else.

For every single call, someone somewhere must know at least the next link in the chain to connect a call. Keep following the chain until you find someone who either through malice or by looking the other way allows someone to spoof someone else's number AND remove their ability to send the current link in the chain (or anyone) messages. (Ideally also send them to prison if they are in the same country.) It shouldn't be that hard, right?



Companies have complex telecom setups but generally want to present a single company number to the outside. Solution: the sender sends a packet with the number they should be perceived as. Everyone passes this on. Everyone "looks the other way" by design, haha


So what, gate that feature behind a check that you can only set an outgoing caller ID belonging to a number range that you own.

The technology to build trustable caller ID has existed for a long time, the problem is no one wants to be the one forcing telcos all over the world to upgrade their sometimes many decades old systems.
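A toy sketch of the gate being described: before a carrier accepts an outgoing caller ID, check it against the number ranges registered to that customer. The prefixes and numbers below are made up for illustration:

    def owns_number(caller_id: str, owned_prefixes: list[str]) -> bool:
        # Accept a presented caller ID only if it falls within a range
        # registered to this customer; otherwise reject it or substitute the real line.
        return any(caller_id.startswith(prefix) for prefix in owned_prefixes)

    # Example: a company that owns the +1 555 010x and +1 555 011x blocks.
    owned = ["+1555010", "+1555011"]

    print(owns_number("+15550117", owned))  # True: within an owned range
    print(owns_number("+15559999", owned))  # False: spoofing attempt, block it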



>Regardless of any future impacts on the labour market or any hypothesised X-risks

Discovering an asteroid full of gold, containing as much gold as half the earth (to put a modest number on it), would have a huge impact on the labour market. Mining jobs for anything conductive, like copper or silver, would all go away. Housing would also be obsolete, as we would all live in golden houses. A huge impact on the housing market, yet it doesn't seem such a bad thing to me.

>We're already at a point where we're counselling elders to ignore late-night messages from people claiming to be a relative in need of an urgent wire transfer.

Anyone can prove their identity, or identities, over the wire, wire-fully or wire-lessly, anything you like. When I went to university, I was the only one attending the cryptography class; no one else showed up for a boring class like that. I wrote a story about the Electrona Corp on my blog.

What I have been saying to people for at least two years now is: "Remember when governments were not just some cryptographic algorithms?" Yeah, that's gonna change. Cryptography is here to stay, it is not as dead as people think, and it's gonna make a huge blast.



> Discovering an asteroid full of gold, containing as much gold as half the earth (to put a modest number on it), would have a huge impact

All this would do is crash the gold price. Also note that all the gold at our disposal right now (worldwide) basically fits into a cube with 20m edges (it's not as much as you might think).

Gold is not suitable to replace steel as a building material (because it has much lower strength and hardness), nor copper/aluminium as a conductor (it's a worse conductor than copper and much worse in conductivity/weight than aluminium). The main technical application short term would be gold-plated electrical contacts on every plug, and little else...



Regarding gold, I like this infographic [1], but my favorite from this channel is wolf population by country. Point being that gold is shiny and beautiful, and it will be used even when it is not an appropriate solution to the problem, just because it is shiny.

I didn't know that copper is a better conductor than gold. Surprised by that.

[1] https://www.youtube.com/watch?v=E2Gd8CRG0cc



> What I have been saying to people for at least two years now is: "Remember when governments were not just some cryptographic algorithms?" Yeah, that's gonna change. Cryptography is here to stay, it is not as dead as people think, and it's gonna make a huge blast.

The thing about cryptography and government is that it's easy to imagine a great technology being adopted at the governmental level because of its greatness. But it is another thing to actually implement it. We live in a bubble where almost everyone knows about cryptographic hashes and RSA, but for most people that is not the case.

Another thing is that political actors tend to try to concentrate power in their own hands. No way will they delegate decision-making to any form of algorithm, cryptographic or not.



As soon as mimicking voices, text messages and human faces becomes a serious problem, like this case in the UK [1], citizens will demand a solution to that problem. I don't personally know how prevalent problems like that are as of today, but given the current trajectory of A.I. models, which become smaller, cheaper and better all the time, soon everyone on the planet will be able to mimic every voice, every face and every handwritten signature of anyone else.

As soon as this becomes a problem, then it might start bottom-up, citizens to government officials, rather than top to bottom, from president to government departments. Then governments will be forced to formalize identity solutions based on cryptography. See also this case in Germany [2].

One example like that is bankruptcy law in China. China didn't have any law regarding bankruptcy until 2007. For a communist country, or rather a not totally capitalist country like China, bankruptcy is not an important subject: when some people stop being profitable, they will keep working because they like to work and they contribute to the great nation of China. That doesn't make any sense, of course, so their government was forced to implement some bankruptcy laws.

[1] https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos... [2] https://news.ycombinator.com/item?id=39866056



What does abating that trend look like? Most AI safety proposals I hear fall into the categories of a) we need to stop developing this technology or b) we need laws that entrench the richest and most powerful organizations in the world as the sole proprietors of this technology. Neither of those actually sounds better than people being paranoid weirdos about trusting text/video/voice. I think that's kinda where we need to be as a culture: these things are not trustworthy, they were only ever good as a rough heuristic, and now that ship has sailed.

We have just finished a transition to treating the digital world as part of our "real" world, but it's time to step that back. Using the internet to interact with known trusted parties will still work fine, provided that some authentication can be shared out-of-band offline. Meeting people and discovering businesses and such? There will be more fakes and scams than real opportunities by orders of magnitude, and as technology progresses our filtering will only get worse.

We need to roll back to "don't trust anything online, don't share your identity or payment information online" outside of, as mentioned, out-of-band verified parties. You can still message your friends and family, do online banking and commerce, but you can't initiate a relationship with a person or business online without some kind of trusted recommendation.


>What does abating that trend look like?

I don't think anyone has a good answer to that question, which is the problem in a nutshell. Job one is to start investing seriously in finding possible answers.

>We need to roll back to "don't trust anything online, don't share your identity or payment information online"

That's easy to say, but it's a trillion-dollar decision. Alphabet and Meta are both worthless in that scenario, because ~all of their revenue comes from connecting unfamiliar sellers with buyers. Amazon is at existential risk. The collapse of Alibaba would have a devastating impact on Chinese exporters, with massive consequent geopolitical risks. Rolling back to the internet of old means rolling back on many years worth of productivity and GDP growth.



> because ~all of their revenue comes from connecting unfamiliar sellers with buyers

Well that's exactly the sort of service that will be extremely valuable in a post-trust internet. They can develop authentication solutions that cut down on fraud at the cost of anonymity.



Point a) is just point b) in disguise. You're just swapping companies for governments.

This tech is dangerous, and I'm currently of the opinion that its uses for malicious purposes are far better and more significant than LLMs replacing anyone's jobs. The bullshit asymmetry principle is incredibly significant for covert ops and asymmetric warfare, and generating convincing misinformation has become basically free overnight.



> What defences do we have when an LLM will be able to have a completely fluent, natural-sounding conversation in someone else's voice?

The world learnt to deal with Nigerian Prince emails and nobody is falling for those anymore. Nothing was changed - no new laws or regulations needed.

Phishing calls have been going on without an AI for decades.

You can be skeptical and call back. If you know your friends or family, you should always be able to find an alternative way to get in touch without too much effort in the modern connected world.

Just recently a gang in Spain was arrested for the "son in trouble" scam. No AI used. Most parents are not fooled by this.

https://www.bbc.com/news/world-europe-68931214

The AI might have some marginal impact, but it does not matter in the big picture of scams. While it is worrisome, it is not a true safety concern.



Sure, all tech has 'real' effects. It's kinda the definition of tech. But all of these concerns more or less fall into the category of "add it to the list of things you have to watch out for living in the 21st century" - to me, this is nothing crazy (yet)

The nature of this tech itself is probably what is getting most people - it looks, sounds and feels _human_ - it's very relatable and easy for a non-tech person to understand it and thus get creeped out. I'd argue there are _far_ more dangerous technologies out there, but no one notices and / or cares because they don't understand the tech in the first place!



>to me, this is nothing crazy (yet)

The "yet" is carrying a lot of weight in that statement. It is now five years since the launch of GPT-2, three years since the launch of GPT-3 and less than 18 months since the launch of ChatGPT. I cannot think of any technology that has improved so much in such a short space of time.

We might hit an inflection point and see that rate of improvement stall, but we might not; we're not really sure where that point might lie, because there's likely to still be a reasonable amount of low-hanging fruit regarding algorithmic and hardware efficiency. If OpenAI and their peers can maintain a reasonable rate of improvement for just a few more years, then we're looking at a truly transformational technology, something like the internet that will have vast repercussions that we can't begin to predict.

The whole LLM thing might be a nothingburger, but how much are we willing to gamble on that outcome?



If we decide not to gamble on that outcome, what would you do differently than what is being done now? The EU already approved the AI act, so legislation-wise we're already facing the problem.


> yet the real-world impacts remain modest so far.

I second that. I remember when Google search first came out. Within a few days it completely changed my workflow, how I use the Internet, my reading habits. It easily 5-10x'd the value of the Internet for me over a couple of weeks.

LLMs are doing nothing of the sort for me.



And I'm sure that it's doing that for some people, but... I think those are mostly in the industry. For most of the people outside the tech bubble, I think the most noticeable impact it has had on their lives so far is that they've seen it being talked about on the news, maybe tried ChatGPT once.

That's not to say it won't have more significant impact in the future; I wouldn't know. But so far, I've yet to see the hype get realised.



Google was a step function, a complete leveling up in terms of usability of returned data.

ChatGPT does this again for me. I am routinely getting zero useful results on the first page or two of Google searches, but AI is answering or giving me guidance quickly.

Maybe this would not seem such an improvement if Google's results were like they were 10 years ago and not barely usable blogspam



> I am routinely getting zero useful results on the first page or two of Google searches, but AI is answering or giving me guidance quickly.

To me, this just sounds like Google Search has become shit, and since Google simply isn't going to give up the precious ad $$$ that the current format is generating, the next best thing is ChatGPT. But this is different from saying that ChatGPT is a similar step up like Search was.

For what it's worth, I agree with you that Google Search has become unusable. Google basically destroyed its best product (for users) by turning it into an ad-riddled shovelware cesspit.

That ChatGPT is about as good as Google Search used to be is a tragedy. Basically, we had a conceptually simple product that functioned very well, and we are replacing it with a significantly more complex product.



OMG I remember trying Google when it was in beta, and HOLY CRAP what I had been using was like freakin night and day. AltaVista: remember that? That was the state of the art before that, and it did not compare. Night and day.


I remember Google being marginally better than Altavista but not much more.

The cool kids in those days used Metacrawler, which meta searched all the search engines.



Google was marginally better for popular searches and significantly better for tail searches. This is a big reason why it flourished with the technical and student crowd in the early days: those exceedingly rare sub-sub-topics would get surfaced higher in the rankings. For esoteric topics, Yahoo didn't have it in its catalog and AltaVista maybe had it, but it was on page 86. Even before spelling correction and dozens of other useful search features were added, it was tail search and finding what you were looking for sooner. Serving speed, too, but perhaps that was more subtle for some.

Metasearch only helps recall. It won't help precision, the metasearch still needs to rank the aggregate results.



I used Metacrawler, it was dog slow. The beauty of Google was it was super fast, and still returned results that were at least as good, and often better, than Metacrawler. After using Google 2-3 times I don’t think I ever used Metacrawler again.


> OpenAI are masters of hype. They have been generating hype for years now, yet the real-world impacts remain modest so far.

Perhaps.

> Do you remember when they teased GPT-2 as "too dangerous" for public access? I do. Yet we now have Llama 3 in the wild, which even at the smaller 8B size is about as powerful as the [edit: 6/13/23] GPT-4 release.

The statement was rather more prosaic and less surprising; are you sure it's OpenAI (rather than say all the AI fans and the press) who are hyping?

"""This decision, as well as our discussion of it, is an experiment: while we are not sure that it is the right decision today, we believe that the AI community will eventually need to tackle the issue of publication norms in a thoughtful way in certain research areas.

We are aware that some researchers have the technical capacity to reproduce and open source our results. We believe our release strategy limits the initial set of organizations who may choose to do this, and gives the AI community more time to have a discussion about the implications of such systems."""



That's fair: the statement isn't hyperbolic in its language. But remember that GPT-2 was barely coherent. In making this statement, I would argue that OpenAI was trying to impart a sense of awe and danger designed to attract the kind of attention that it did. I would argue that they have repeatedly invoked danger to impart a sense of momentousness to their products. (And to further what is now a pretty transparent effort to monopolize the tech through regulatory intervention.)


> (And to further what is now a pretty transparent effort to monopolize the tech through regulatory intervention.)

I disagree here also: the company has openly acknowledged that this is a risk to be avoided with regard to safety-related legislation. What they've called for looks a lot more like "we don't want a prisoner's dilemma that drives everyone to go fast at the expense of safety" rather than "we're good, everyone else is bad".



I can’t even get GPT-4 to reliably take a list of data and put it in a CSV. There’s a problem every single time.

People read too many sci-fi books and then project their fantasies on to real-world technologies. This stuff is incredibly powerful and will have social effects, but it’s not going to replace every single job by next year.



I remember when people used to argue about regex being bad or good, with a lot of low quality regex introducing bugs in codebases.

Now we have devs asking AI to generate regex formulas and pasting them into code without much concern about their validity.
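One cheap mitigation is to keep a handful of known-good and known-bad inputs and run any generated pattern against them before it gets committed. A minimal sketch; the pattern and test strings are invented for illustration:

    import re

    # Suppose an assistant produced this pattern for ISO dates - treat it as untrusted.
    candidate = r"\d{4}-\d{2}-\d{2}"

    should_match = ["2024-05-14", "1999-12-31"]
    should_not_match = ["14/05/2024", "2024-5-1", "hello"]

    pattern = re.compile(candidate)
    assert all(pattern.fullmatch(s) for s in should_match), "missed a valid case"
    assert not any(pattern.fullmatch(s) for s in should_not_match), "matched an invalid case"
    print("candidate regex passed the sanity checks")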



Humans stumble with that as well. The problems are that CSV is not really that well defined, and it is not clear to people how quoting needs to be done. The training set might not contain enough complex examples (newlines in values?).
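For what it's worth, the quoting rules are exactly the kind of thing worth delegating to a library rather than to either a human or a model. A minimal sketch with an awkward field (comma, quote and newline in one value), using Python's standard csv module:

    import csv
    import io

    rows = [
        ["name", "note"],
        ["Acme, Inc.", 'said "hello"\nthen left'],  # comma, quote and newline in one field
    ]

    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    print(buf.getvalue())

    # The writer quotes the tricky field and doubles the embedded quote,
    # and csv.reader round-trips it back to the original values.
    assert list(csv.reader(io.StringIO(buf.getvalue()))) == rows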


Even if you get it to work 100% of the time, it will only be 99.something%. That's just not what it's for, I guess. I pushed a few million items through it for classification a while back, and the creative ways it found to sometimes screw up astounded me.


Yeah and that's why I'm skeptical of the idea that AI tools will just replace people, in toto. Someone has to ultimately be responsible for the data, and "the AI said it was true" isn't going to hold up as an excuse. They will minimize and replace certain types of work, though, like generic illustrations.


> Do you remember when they teased GPT-2 as "too dangerous" for public access? I do.

I can't help but notice the huge amount of hindsight and bad faith demonstrated here. Yes, now we are aware that the internet did not drown in a flood of bullshit (well, not noticeably more) when GPT-2 was released.

But was it obvious? I certainly thought that there was a chance that the amount of blog spam that could be generated effortlessly might just make internet search unusable. You are declaring "hype", when you could also say "very uncertain and conscientious". Is this not something we want people in charge to be careful with?



I was going to say the same thing. For some real-world estimation tasks where I don't need 100% accuracy (example: analysing the working capital of a business based on its balance sheet, analysing some images and estimating inventory, etc.), the job done by GPT-4o is better than that of fresh MBA graduates from tier 2/tier 3 cities in my part of the world.

Job seekers currently in college have no idea what is about to hit them in 3-5 years.



I agree. The bias of HN and the tech bubble that many people are not noticing is that they're full of engineers judging GPT-4 against software engineering tasks. In programming, the margin of error is incredibly slim, in the sense that a compiler either accepts entirely correct code (in its syntax, of course) or rejects it. There is no in-between, and verifying software to be correct is hard.

In any other industry, where you just need an average margin of error close to a human's work and verification is much easier than generating possible outputs, the market will change drastically.



Not really. Even a human bad at reasoning can take an hour to tinker around and figure things out. GPT-4 just does not have the deep planning/reasoning ability necessary for that.


Reasoning and planning are different things. It's certainly getting quite good at deductive reasoning, especially when forced to check its own arguments for flaws every time it states something. (I had a several-hour chat with it yesterday, and I was very impressed by the progress.)

Planning is different in that it is an essential part of agency. That's what Q* is supposed to add. My guess is that planning is the next type of functionality to be added to GPT. I wouldn't be surprised if they already have a version internally with such functionality, but that they've decided to hold it back for now for reasons such as safety (some may care about the election this year) or simply that the inference costs are so huge they cannot possibly expose it publicly.



The only reason I still have a job is that it can't (yet) take full advantage of artefacts generated by humans.

"Intern of all trades, senior of none", to modernise the cliché.



I think you might be falling for selection bias. I guess you are surrounding yourself with a lot of smart people. "Tinker around and figure things out" is definitely something certain humans (bad at reasoning) can't do. I already prefer the vision model when it comes to asking for a picture description (blind user) over many humans I personally know. The machine is usually more detailed, and takes the time to read the text, instead of trying to shortcut and decide for me what's important.

Besides, people from the English-speaking countries do not have to deal with foreign languages. Everyone else has to. "Aber das ist ja in englisch" ("but that's in English") is a common blocker for consuming information around here. I tell you, if we don't manage to ramp up education a few notches, we'll end up with an even higher stddev when it comes to practical intelligence. We already have perfectly normal-seeming humans absolutely unable to participate on the internet.


If everyone is average at reasoning then it must not be a very important trait or we’d all be at reasoning school getting better at it.

Really philosophy seems to be one of the least important subjects right now. Hardly anyone learns about it in school.

If it was so important to success in the wild, then it would stand to reason that we would all work hard at improving our reasoning skills, but very few do.



What schools teach is what governments who set the curriculum like to think is important, which is why my English lessons had a whole section on the Shakespearean (400-year-old, English, Christian) take on the life and motivations of a Jewish merchant living in Venice, followed up with an 80-year-old (at the time) English poem on exactly how bad it is to watch your friends choke to death as their lungs melt from chlorine gas in the trenches of the First World War.

These did not provide useful life-lessons for me.

(The philosophy A-level I did voluntarily seemed to be 50% "can you find the flaws in this supposed proof of the existence of god?")



Yeah. OpenAI are certainly not masters of hype lol. They released their titular product to basically no fanfare or advertisement. ChatGPT took off on Word of Mouth alone. They dropped GPT-4 without warning and waited months to ship its most exciting new feature (image input).

Even now, they're shipping text-image 4o but not the new voice while leaving old-voice up and confusing/disappointing a whole lot of people. This is a pretty big marketing blunder.



> ChatGPT took off on Word of Mouth alone.

I remember for a good 2-3 months in 2023, ALL you could see on TikTok / YouTube Shorts was just garbage about 'how amazing' ChatGPT was. Like - video after video, and I was surprised by the repeat content being recommended to me... No doubt OpenAI (or something) was behind that huge marketing push



Is it not possible this would be explained by people simply being interested in the technology and TikTok/Youtube algorithms noticing that—and that they would have placed you in the same bubble, which is probably an accurate assignment?

I doubt OpenAI spent even one cent marketing their system (e.g. as in paying other companies to push it).



Well if you were a typical highly engaged TikTok or YouTube user, you are probably 13-18 years old. The kind of cheating in school that ChatGPT enabled is revolutionary. That is going to go viral. It's not a marketing push. After years of essentially learning nothing during COVID lockdowns, can you understand how transformative that is? It's like 1,000x more exciting than pirating textbooks, stealing Mazdas, or whatever culturally self-destructive life hacks were being peddled by freakshow brocolliheads and Kim Kardashian-alikes on the platform.

It's ironic because the OpenAI creators really loved school and excelled academically. Nobody cares that ChatGPT destroyed advertising copywriting. But whatever little hope remained for the average high schooler post-lockdowns, it was destroyed by instant homework cheating via ChatGPT. So much for safety.



No, it's just the masses sniffing out the new fascinating techbro thing to make content about.

In a way I'm sorry, that's what people do nowadays. I'd prefer it to be paid for, honestly.



"real-world impacts remain modest so far." Really? My Google usage has went down with 90% (it would just lead me to some really bad take from a journalist anyway, while ChatGPT can just hand me the latest research and knows my level of expertise). Sure it is not so helpful at work, but if OpenAI hasnt impacted the world I fail to see which company have in this decade.


“Replaced Google” is definitely an impact, but it’s nothing compared to the people that were claiming entire industries would be wiped out nearly overnight (programming, screenwriting, live support, etc).


Speak to some illustrators or voiceover artists - they're talking in very bleak terms about their future, because so many of them are literally being told by clients that their services are no longer required due to AI. A double-digit reduction in demand is manageable on aggregate, but it's devastating at the margin. White-collar workers having to drive Ubers or deliver packages because their jobs have been taken over by AI is no longer a hypothetical.


We had this in content writing and marketing last year. A lot of layoffs were going to happen anyway due to the end of ZIRP, AI came just at the right time, and so restructuring came bundled with "..and we are doing it with AI!".

It definitely took out a lot of jobs from the lowest rungs of the market, but on the more specialized / upper end of the ladder wages actually got higher and a lot of companies got burned, and now they have to readjust. It's still rolling over slowly, as there are a lot of companies selling AI products and in turn new companies adopting those products. But it tells you a lot that

A) a company selling an AI assistant last year is now totally tied to automating busy work tasks around marketing and sales

B) AI writing companies are some of the busiest in employing human talent for... writing and editorial roles!

It's all very peculiar. I haven't seen anything like this in the past 15 years... maybe the financial crisis and big data was similar, but much much smaller at scale.



We should be thinking pretty hard right about now why this kind of progress and saving these expenses is a BAD thing for humanity. The answer will touch deeply ingrained ideas about what and who should underpin and benefit from progress and value in society.


Search was always a byproduct of Advertising. Don’t blame Google for sticking to their business model.

We were naive to think we could have nice things for free.



For the premium subscribers it'll be good, but they'll surely ruin the experience for the free tier, just like Spotify, because they just can't keep their business sustainable without showing VCs some profits.


I believe you, and I do turn to an LLM over Google for some queries where I'm not concerned about hallucination. (I use Llama 3 most of the time, because the privacy is absolute.)

But OpenAI is having a hard time retaining/increasing ChatGPT users. Also, Alphabet's stock is about as valuable as it's ever been. So I don't think we have evidence that this is really challenging Google's search dominance.



Google is an ad company. Ad prices are set at auction and most companies believe that they need ads. Fewer customers don't necessarily mean that earnings go down, as when clicks go down the prices might go up (in the absence of ad competitors). Ergo, they don't compete (yet, at least).

But ChatGPT has really hurt Google's brand image.



The questions I ask ChatGPT have (almost) no monetary value for Google (programming, math, etc).

The questions I still ask Google have a lot of monetary value (restaurants, clothes, movies, etc).



> They have been generating hype for years now, yet the real-world impacts remain modest so far.

I feel like everyone who makes this claim doesn't actually have any data to back it up.



> yet the real-world impacts remain modest so far.

I spent part of yesterday evening sorting my freshly dried t-shirts into 4 distinct piles. I used OpenAI Vision (through BeMyEyes) from my phone. I got a clear description of each and every piece of clothing, including print, colours and brand. I am blind BTW. But I guess you are right, no impact at all.

> Yet we now have Llama 3 in the wild

Yes, great, THANKS Meta, now the scammers have something to work with. That's a wonderful achievement which should be praised!



> I got a clear description of each and every piece of clothing, including print, colours and brand. I am blind BTW.

That is a really great application of this tech. And definitely qualifies as real-world impact. Thanks for sharing that!



> Do you remember when they teased GPT-2 as "too dangerous" for public access? I do.

Maybe not GPT-2, but in general LLMs and other generative AI types aren't without their downsides.

From companies looking to downsize their staff to replace them with software, to the work of artists/writers being devalued somewhat, to even easier scams and something like the rise of AI girlfriends, which has also gotten some critique, some of those can probably be a net negative.

Even when it's not pearl clutching over the advancements in technology and the social changes that arise, I do wonder how much my own development work will be devalued due to the somewhat lowered entry barrier into the industry and people looking for quick cash, same as with boot camps leading to more saturation. Probably not my position individually (not exactly entry level), but the market as a whole.

It's kind of at a point where I use LLMs for dev work not to fall behind, cause the productivity gains for simple problems and boilerplate are hard to argue with.



ChatGPT 3.5 has been neutered, as it won't spit out anything that isn't overly politically correct. 4chan were hacking their way around it. Maybe that's why they decided it was "too dangerous".


Like another comment mentioned, sigmoid curves [1] are ubiquitous with neural network systems. Neural network systems can be intoxicating because it's so "easy" (relatively speaking) to go from nothing to 80% in extremely short periods of time. And so it seems completely obvious that hitting 100% is imminent. Yet it turns out that each percent afterwards starts coming exponentially more slowly, and we tend to just bump into seemingly impassable asymptotes far from where we'd like to be.
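To make that concrete, here is a small numeric sketch comparing a logistic curve with the exponential that matches its early growth; the parameters are arbitrary, chosen only to show that the two are nearly indistinguishable early on and diverge once the logistic approaches its ceiling:

    import math

    def logistic(t, ceiling=1.0, rate=1.0, midpoint=6.0):
        # Slow start, near-exponential middle, saturation at the ceiling.
        return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

    def early_exponential(t, rate=1.0, midpoint=6.0):
        # The exponential that approximates the logistic while t is well below the midpoint.
        return math.exp(rate * (t - midpoint))

    for t in range(0, 13, 2):
        print(f"t={t:2d}  logistic={logistic(t):.4f}  exponential={early_exponential(t):.4f}")
    # The columns track each other closely at first; past the midpoint the
    # exponential keeps climbing while the logistic levels off near 1.0.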

~8 years ago when self driving technology was all the rage and every major company was getting on board with ever more impressive technological demos, it seemed entirely reasonable to expect that we'd all be in a world of complete self driving imminently. I remember mocking somebody online around the time who was pursuing a class C/commercial trucking license. Yet now a decade later, there are more truckers than ever and the tech itself seems further away than ever before. And that's because most have now accepted that progress on such has basically stalled out in spite of absolutely monumental efforts at moving forward.

So long as LLMs regularly hallucinate, they're not going to be useful for much other than tasks that can accept relatively high rates of failure. And many of those generally creative domains are the ones LLMs are paradoxically the weakest in - like writing. Reading a book written by an LLM would be cruel and unusual punishment given the current state of the art. One domain I do see them completely taking over is search. They work excellently as natural language search engines, and "failure" in such is very poorly defined.

[1] - https://en.wikipedia.org/wiki/Sigmoid_function



I'm not really sure your self-driving analogy is apt here. Waymo has cars on the road right now that are totally autonomous, and just expanded its footprint. It has been longer and more difficult than we all thought, and those early tech demos were a glimmer of what was to come; then we had to grind to get there, with a lot of engineering.

I think what maybe seems not obvious amidst the hype is that there is a hell of a lot of engineering left to do. The fact that you can squash the weights of a neural net down to 3 bits per param and it still works -- is evidence that we have quite a way to go with maturing this technology. Multimodality, improvements to the UX of it, the human-computer interface part of it. Those are fundamental tech things, but they are foremost engineering problems. Getting latency down. Getting efficiency up. Designing the experience, then building it out.
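A back-of-the-envelope sketch of what squashing weights down to 3 bits means: map each float weight onto one of 2^3 = 8 evenly spaced levels and back, then look at the error. Real quantization schemes (per-channel scales, outlier handling, etc.) are considerably more careful than this:

    def quantize_dequantize(weights, bits=3):
        # Uniform quantization: snap each weight to one of 2**bits levels
        # spanning [min, max], then map back to floats.
        levels = 2 ** bits - 1
        lo, hi = min(weights), max(weights)
        scale = (hi - lo) / levels if hi != lo else 1.0
        return [round((w - lo) / scale) * scale + lo for w in weights]

    weights = [-0.31, -0.18, -0.05, 0.02, 0.07, 0.12, 0.29, 0.44]
    approx = quantize_dequantize(weights)
    print(approx)
    print("worst-case error:", max(abs(a - w) for a, w in zip(approx, weights)))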

25 years ago, early tech demos on the internet were promising that everyone would do their shopping, entertainment, socializing, etc... online. Breathless hype. 5 years after that, the whole thing crashed, but it never went away. People just needed time to figure out how to use it and what it was useful for, and discover its limitations. 10 years after that, engineering efforts were systematized and applied against the difficult problems that still remained. And now: look at where we are. It just took time.



I don't think he's saying that AGI is impossible: almost no one (nowadays) would suggest that it's anything but an engineering challenge. The argument is simply one of scale, i.e. how long that engineering challenge will take to solve. Some people are suggesting on the order of years. I think they're suggesting it'll be closer to decades, if that.


AGI being "just an engineering challenge" implies that it is conceptually solved, and we need only figure out how to build it economically.

It most definitely is not.



Waymo cars are highly geofenced in areas with good weather and good quality roads. They only just (in January) gained the capability to drive on freeways.

Let me know when you can get a Waymo to drive you from New York to Montreal in winter.



> Waymo cars are highly geofenced in areas with good weather and good quality roads. They only just (in January) gained the capability to drive on freeways

They are an existence proof that the original claim that we seem further than ever before is just wrong.



Why do some people gloat about moving goalposts around?

15 years ago self driving of any sort was pure fantasy, yet here we are.

They'll release a version that can drive in poor weather and you'll complain that it can't drive in a tornado.



It's been 8 years and I still don't have my autonomous car.

Meanwhile I've been using ChatGPT at work for _more than a year_ and it's been tremendously helpful to me.

This is not hype, this is not about how AI will change our lives in the future. It's there right here, right now.



> So long as LLMs regularly hallucinate, they're not going to be useful for much other than tasks that can accept relatively high rates of failure.

Yep. So basically they're useful for a vast, immense range of tasks today.

Some things they're not suited for. For example, I've been working on a system to extract certain financial "facts" across SEC filings. ChatGPT has not been helpful at all either with designing or implementing (except to give some broad, obvious hints about things like regular expressions), nor would it be useful if it was used for the actual automation.

But for many, many other tasks -- like design, architecture, brainstorming, marketing, sales, summarisation, step by step thinking through all sorts of processes, it's extremely valuable today. My list of ChatGPT sessions is so long already and I can't imagine life without it now. Going back to Google and random Quora/StackOverflow answers laced with adtech everywhere...



> So long as LLMs regularly hallucinate, they're not going to be useful for much other than tasks that can accept relatively high rates of failure.

But is this not what humans do, universally? We are certainly good at hiding it – and we are all good at coping with it – but my general sense when interacting with society is that there is a large amount of nonsense generated by humans that our systems must and do already have enormous flexibility for.

My sense is that's not an aspect of LLMs we should have any trouble with incorporating smoothly, just by adhering to the safety nets that we built in response to our own deficiencies.



I think it's more like an exponential curve where it looks flat moments before it shoots up.

Mapping the genome was that way. On a 20-year schedule, barely any progress for 15 and then, poof, done ahead of schedule.



The sigmoid is true in humans too. You can get 80% of the way to being sort of good at a thing in a couple of weeks, but then you hit the plateau. In a lot of fields, confidently knowing and applying this has made people local jack-of-all-trades experts... the person who often knows how to solve the problem. But Jack is no longer needed so much. ChatJack's got your back. Better to be the person who knows one thing in excruciating detail and depth, and never ever let anyone watch you work or train on your output.


The two AIs talking to each other were like listening to two commercials talking to each other. Like a call-center menu that you cannot skip. And they _kept repeating themselves_. Ugh. If this is the future I'm going to hide in a cave.


> or just.. training data.

I have a much less "utopian" view of the future. I remember during the renaissance of neural networks (ca. 2010-15) it was said that "more data leads to better models", and that was at a time when researchers frowned upon the term Artificial Intelligence and would rather use Machine Learning. Fast forward a decade, and LLMs are very good synthetic data generators that try to mimic human-generated input; I can't help thinking that this was the sole initial intent of LLMs. And that's it for me. There's not much to hype and no intelligence at all.

What happens now is that human-generated input becomes more valuable, and every online platform (including minor ones) will have some form of gatekeeping in place, sooner rather than later. Besides that, a lot of work still can't be done in front of a computer in isolation and probably never will, and even if it could, automation is not an end in itself. We still don't know how to measure a lot of things, much less how to capture everything as data vectors.



> We’re moving toward a world where every job will be modeled, and you’ll either be an AI owner, a model architect, an agent/hardware engineer, a technician, or just.. training data.

I understand that you might be afraid. I believe that a world ruled only by LLM companies is not practically achievable except in some dystopian universe. The likelihood of a world where the only jobs are model architect, engineer or technician is very, very small.

Instead, let's consider the positive possibilities that LLMs can bring. They can lead to new and exciting opportunities across various fields. For instance, they can serve as a tool to inspire new ideas for writers, artists, and musicians.

I think we are going towards a more collaborative era where computers and humans interact much more. Everything will be a remix :)



> The likelihood of a world where the only jobs are model architect, engineer or technician is very, very small.

Oh, especially since it will be a priority to automate their jobs, or somehow optimize them with an algorithm because that's a self-reinforcing improvement scheme that would give you a huge edge.



My new PC arrives tomorrow. Once I source two RTX 3060s I'll be an AI owner, no longer dependent on cloud APIs.

Currently the bottleneck is Agents. If you want a large language model to actually do anything you need an Agent. Agents so far need a human in the loop to keep them sane. Until that problem is solved most human jobs are still safe.



GPT-4o incorporated multimodality directly into the neural network, while cutting inference costs in half.

I fully expect GPT 5 (or at the latest 6) to similarly have native inclusion of agentic capabilities either this year or next year, assuming it doesn't already, but is just kept from the public.



Going to put the economy in a very, very weird situation if true.

Will be like, the end of millions of careers overnight.

It will probably strongly favour places like China and Russia though, where the economy is already strongly reliant on central control.



Until the hallucination problem is solved, the output can't be trusted.

So outside of use cases where the user can quickly verify the result (like picking a decent generated image, etc.), I can't see it being used much.



RAG? Sure. I even implemented systems using it, and enabling it, myself.

And guess what: RAG doesn't prevent hallucination. It can reduce it, and there are most certainly areas where it is incredibly useful (I should know, because that's what earns my paycheck), but it's useful despite hallucinations still being a thing, not because we solved that problem.
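
For anyone unfamiliar with the acronym, a minimal sketch of the retrieval-augmented generation idea is below; the word-overlap scoring and the `llm` callable are illustrative stand-ins, not any particular product's API:

    # Minimal RAG sketch: retrieve the most relevant documents, then ask the
    # model to answer using them. `llm` is a hypothetical prompt -> answer
    # callable; real systems use embedding search rather than word overlap.

    def score(query, doc):
        q, d = set(query.lower().split()), set(doc.lower().split())
        return len(q & d) / (len(q) or 1)

    def retrieve(query, docs, k=2):
        # Pick the k documents that overlap most with the query.
        return sorted(docs, key=lambda doc: score(query, doc), reverse=True)[:k]

    def rag_answer(query, docs, llm):
        context = "\n".join(retrieve(query, docs))
        prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
        # Nothing here *forces* the model to stay inside the retrieved context;
        # retrieval makes grounded answers more likely, not guaranteed.
        return llm(prompt)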



> Can it just do my entire job for me?

All AIs up to now lack autonomy. So I'd say that until we crack this problem, it is not going to be able to do your job. Autonomy depends on a kind of data that is iterative, multi-turn, and learned from environments, not from static datasets. We have the exact opposite: lots of non-iterative, off-policy (human-made, AI-consumed) text.



All I could think about when watching this demo was how similar capabilities will work on the battlefield. Coordinated AIs look like they will be obscenely effective.

Everything always starts as a toy.



The "Killer app" for AGI/ASI is, I suspect, going to be in robotics, even more so than in replacing "white collar workers".

That includes, beyond literal Killers, all kinds of manufacturing, construction and service work.

I would expect a LOT of funds to go into research on all sorts of actuators, artificial muscles and any other technology that will be useful in building better robots.

Companies that can get and maintain a lead in such technologies may reach a position similar to what US Steel had in the early 20th century.

That could be the next Nvidia.

I would not be at all surprised if we will have a robot in the house in 10 years that can clean and do the dishes, and that is built using basically the same parts as the robots that replace our soldiers and the police.

Who will ultimately control them, though?



I would expect a LOT of funds to go into research on all sorts of actuators, artificial muscles and any other technology that will be useful in building better robots.

If you had an ASI? I don't think you'd need a lot of funds to go into this area anymore? Presumably it would all be solved overnight.



It's possible. Right now AI + robotics has been a big area of research for a while, and it's very good at some tasks; see basically everything Boston Dynamics does with respect to dynamic balancing. They work very well alongside control systems. However, for multimodal task planning it's not there. A year or two back I wrote a long comment about it, but basically there is this idea of "grounding": connecting computer vision, object symbols/concepts, and task planning, which remains elusive. It's a similar problem with self-driving cars - you want to be able to reason very strongly about things like "place all of the screws into the red holes" in a way that maps automatically to the actions for those things.


Yes. As you say, a lot of the limitations so far have been in the control part, which is basically AI.

Given the pace that AI is currently moving at, it seems to me that more and more, the mechanical aspect is becoming the limitation.

GPT 4o now seems to be quite good at reasoning about the world from pictures in real time. I would expect it would soon become easy for it to do the high level part of many practical tasks, from housekeeping to manufacturing or construction. (And of course military tasks.)

This leaves the direct low-level actuator control to execute such tasks in detail. But even there, development has been immense. See for instance these soccer playing robots [1]

And with both high-level and low-level control handled (assuming models will soon add agentic features directly into the neural networks), the only missing piece is the ability to build mechanically capable and reliable robots at a low enough price that they become cheaper than humans for various kinds of work.

There is one more limitation, of course, which is that GPT-4o still requires a constant connection to a data center, and that the model is too large to run within a device or machine.

This is also one of the most critical limitations of self-driving. Had the AI within a Tesla had the same amount of compute available as GPT-4o, it would be massively more capable.

[1] https://www.youtube.com/watch?v=RbyQcCT6890



This is still GPT-4. I don't expect much more from this version than what the previous version could do, in terms of reasoning abilities.

But everyone is expecting them to release gpt5 later this year, and it is a bit scary to think what it will be able to do.



It's quite different from GPT-4 in two respects:

1) It's natively multi-modal in a way I don't think GPT-4 was.

2) It's at least twice as efficient in terms of compute. Maybe 3 times more efficient, considering the increase in performance.

Combined, those point towards some major breakthroughs having gone into the model. If the quality of the output hasn't gone up THAT much, it's probably because the technological innovations mostly were leveraged (for this version) to reduce costs rather than capabilities.

My guess is that we should expect them to leverage the 2x-3x boost in efficiency in a model that is at least as large as GPT-4 relatively soon, probably this year, unless OpenAI has safety concerns or something and keeps it internal-only.



There has been speculation that this is the same mystery model floating around on the LMSYS Chatbot Arena, and they claim a real, observable jump in Elo scores, but this remains to be seen; some people don't think it's even as capable as GPT-4 Turbo, so TBD.


Branding aside, this pretty much is GPT 5.

The evidence for that is the change in the tokenizer. The only way to implement that is to re-train the entire base model from scratch. This implies that GPT 4o is not a fine-tuning of GPT 4. It's a new model, with a new tokenizer, new input and output token types, etc...

They could have called it GPT-5 and everyone would have believed them.
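
The tokenizer change is easy to observe from the outside, assuming the tiktoken package and the encoding names it publishes (cl100k_base for the GPT-4 family, o200k_base for GPT-4o):

    import tiktoken

    # GPT-4 models use the cl100k_base encoding; GPT-4o shipped with the
    # newer, larger o200k_base encoding (names as published by tiktoken).
    old_enc = tiktoken.get_encoding("cl100k_base")
    new_enc = tiktoken.get_encoding("o200k_base")

    text = "GPT-4o incorporated multimodality directly in the neural network."
    # Compare token counts under each encoding.
    print(len(old_enc.encode(text)), len(new_enc.encode(text)))

    # A different vocabulary means a different embedding table and different
    # token IDs everywhere, i.e. not something you get by fine-tuning GPT-4.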



Pretty sure they said they would not release GPT-5 on Monday. So it's something else still. And I don't see any sort of jump big enough to label it as 5.

I assume GPT-5 has to be a heavier, more expensive and slower model initially.

GPT-4o is like an optimisation of GPT-4.



I’ve used it for a couple of hours to help with coding and it feels very similar to gpt4: still makes erroneous and inconsistent suggestions. Not calling it 4.5 was the right call. It is much faster though.

The expectations for gpt5 are sky high. I think we will see a similar jump as 3.5 -> 4.



Nature had been doing that for billions of years until a few decades ago when we were told "progress" meant we had to stop doing the same thing more peacefully and intentionally.

My guess is the future belongs to those who don't stop—who, in fact, embrace the opposite of stopping.

I would even suggest that the present belongs to those who didn't stop. It may be too late for normal people to ever catch up by the time we realize the trick that was played on us.



The present absolutely belongs to those who didn't stop, but it's been a lot longer than a few decades.

Varying degrees of greedy / restless / hungry / thirsty / lustful are what we've got, because how is contentedness ever going to compete with that over millennia?



It just occurred to me that this is one of the core things most successful religions have been trying to do in some form from the time they first arose.

I've had a lot of negative things to say about religion for many years. However, as has been often observed, 'perception is reality' to a certain extent when it affects how people behave, and perhaps it's kind of a counterweight against our more selfish tendencies. I just wish we could do something like it without made up stories and bigotry. Secular humanist Unitarians might be about the best we can do right now in my opinion... I'm hoping that group continues to grow (they have been in recent years).



People with your sentiment said the same thing about all cool tech that changed the world. That doesn't change the reality: a lot of professions will need to adapt or they will go extinct.


> People with your sentiment said the same thing about all cool tech that changed the world.

They also said it about all the over-hyped tech that did not change the world. This mentality of “detractors prove something is good” is survivorship bias.

Note I’m not saying you’ll categorically be proven wrong, just that your argument isn’t particularly strong or valid.



I am a PhD biophysicist working within the field of biological imaging. Professionally, my team (successfully) uses deep learning and GANs for a variety of tasks within the field of imaging, such as segmentation, registration, and predictive protein/transcriptomics. It’s good stuff, a game changer in many ways. In no way however, does it represent generalized AI, and nobody in the field makes this claim even though the output of these algorithms match or out perform humans in cases.

LLMs are no different. Like DL modules that are very good at outputting images that mimic biological signatures, LLMs are very good at outputting texts that eerily mimic human language.

However (and this is a point of which programmers are woefully and comically ignorant), human language and reason are two separate things. Tech bros wholly confuse the two, and thus make outlandish claims that we have achieved, or are on the brink of achieving, actual AI systems.

In other words, while LLMs and DL in general can perform specific tasks well, they do not represent a breakthrough in artificial intelligence, and thus will have a much narrower application space than actual AI.



If you've been in the field you really should know that the term AI has been used to describe things for decades in the academic world. My degree was in AI back before RBMs and Hinton's big reveal about making things 100,000 times faster (do the main step just once, not 100 times, and take 17 years to figure that out).

You're talking more about AGI.

We need "that's not AI" discussions like we need more "serverless? It's still on some server!!" discussions.



I think it's even incomparable to server vs serverless discussions.

It's about meaning of intelligence. These people don't have problems claiming that ants or dolphins are intelligent, but suddenly for machines to be classified as artificial intelligence they must be exactly on the same level as humans.

Intelligence is just about the ability to solve problems. There's no implication that in order for something to be intelligent it has to perform on at least the same level as top people in that field in the World.

It just has to be beyond a simple algorithm and be able to solve some sort of problem. You have AIs in video games that are just bare logic spaghetti computation with no neural networks.



This is true, but only to a point where mimicking and, more broadly speaking, statistically imitating data are understood in a more generalized way.

LLMs statistically imitate the texts of the real world. To achieve a certain threshold of accuracy, it turns out they need to imitate the underlying Turing machine/program/logic that runs in our brains when we ourselves understand and react properly to texts. That is no longer in the realm of old-school data-as-data statistics, I would say.



> LLMs are very good at outputting texts that eerily mimic human language.

What a bizarre claim. If LLMs are not actually outputting language, why can I read what they output then? Why can I converse with it?

It's one thing to claim LLMs aren't reasoning, which is what you later do, but you're disconnected from reality if you think they aren't actually outputting language.



> generalized AI

No one is talking about it being AGI. Everyone is talking about just AI specifically. I think your problem is thinking that AI = AGI.

For example AI in video games is very specific and narrow to its domain.



human language and reason are two separate things

... in the human brain which has evolved "cores" to handle each task optimally.

It's like the Turing Test. If it looks like it's reasoning, does it matter that it's doing it like a human brain or not?



"We shall not be very greatly surprised if a woman analyst who has not been sufficiently convinced of the intensity of her own wish for a penis also fails to attach proper importance to that factor in her patients" Sigmund Freud, in response to Karen Horney’s criticism of his theory of penis envy.


W-what? Lad, have you used ChatGPT? It can instantly give you intelligent feedback on anything (usually better than any expert community, like 90% of the time). On extremely detailed, specific tasks (like writing algorithms or refactoring) it's able to spit out either working code or code so close to working that it's still faster than what you could have done yourself. It can explain things better than probably 99.999% of teachers.

It will give you detailed examples that are much easier to follow than vague, error-prone spec docs. That's scratching the surface. Other people are far more creative than me and have used ChatGPT for mind-blowing stuff already. Whatever it's doing passes for 'reasoning' and 'intelligence' in my book. To me it doesn't matter whether it's the same kind of intelligence as a human or whether there's any amount of awareness, as those are both philosophical questions of no consequence to my work.

For what these pieces of tech can do I feel that they're drastically under-utilized.



(IMO) AI cannot murder people. The responsibility of what an AI does falls on the person who deployed it, and to a lesser extent the person who created it. If someone is killed by a fully autonomous weapon then that person has been murdered by the person or people who created and enabled the AI, not the AI itself.

This is no different to saying a person with a gun murdered someone rather than attributing the murder to the gun. An AI gun is just a really fancy gun.



> This is no different to saying a person with a gun murdered someone rather than attributing the murder to the gun.

And “guns don’t kill people, people kill people”¹ is a bad argument created by the people who benefit from the proliferation of guns, so it’s very weird that you’re using that as if it were a valid argument. It isn’t. It’s baffling anyone still has to make this point: easy access and availability of guns makes them more likely to be used. A gun which does not exist is a gun which cannot be used by a person to murder another.

It's also worth noting the exact words of the person you're responding to (emphasis mine):

> It can also murder people, and it will continue being used for that.

Being used. As in, they’re not saying that AI kills on its own, but that it’s used for it. Presumably by people. Which doesn’t contradict your point.

¹ https://en.wikipedia.org/wiki/Guns_don%27t_kill_people,_peop...



There will come a time when complex systems can be better predicted with AI than with conventional mathematical models. One use case could be feeding body scans into them for cancer prevention. AFAIK this is already being researched.

There may come a time when we grow so accustomed to this that decisions become so heavily influenced by AI that we believe it more than human judgment.

And then it can very well kill a human through misdiagnosis.

I think it is important to not just put this thought aside, but to evaluate all risks.



> And then it can very well kill a human through misdiagnosis.

I would imagine outcomes would be scrutinized heavily for an application like this. There is a difference between a margin of error (which exists with human doctors as well) and a sentient AI that has decided to kill, which is what it sounds like you're describing.

If we didn't give it that goal, how does it obtain it otherwise?



The mass murder of Palestinians is already partially blamed or credited to an "AI" system that could identify people. Humans spent seconds reviewing the outcome. This is the reality of AI already being used to assist in killing. AI can't take the blame legally speaking, but it makes it easier to make the call and sleep at night. "I didn't order a strike on this person and their family of eight, the AI system marked this subject as a high risk, high value target". Computer-assisted dehumanization. (Not even necessarily AI)


Except that with a gun, you have a binary input (the trigger), so you can squarely blame a human for misunderstanding what they did when they accidentally shot someone on the grounds that the trigger didn't work.

A prompt is a _very_ different matter.



Yes, but a person wielding a knife has morals, a conscience and a choice, the fear is that an AI model does not. A lot of killer AI science fiction boils down to "it is optimal and logical that humanity needs to be exterminated"; no morality or conscience involved.


Which is why there are laws around which knives are allowed and which are banned. Or how we design knives to be safe. Or how we have a common understanding of what we do with knives - and what not. Such as not giving them to toddlers... So what's your point?


The point is not the tool but how it's used. "What knives are allowed" is a moot point because a butter knife or letter opener can be used to kill someone.


We've had voice input and voice output with computers for a long time, but it's never felt like spoken conversation. At best it's a series of separate voice notes. It feels more like texting than talking.

These demos show people talking to artificial intelligence. This is new. Humans are more partial to talking than writing. When people talk to each other (in person or over low-latency audio) there's a rich metadata channel of tone and timing, subtext, inexplicit knowledge. These videos seem to show the AI using this kind of metadata, in both input and output, and the conversation even flows reasonably well at times. I think this changes things a lot.



The "magic" moment really hit in this, like you're saying. Watching it happen and being like "this is a new thing". Not only does it respond in basically realtime, it concocts a _whole response_ back to you as well. It's like asking someone what they think about chairs, and then that person being able to then respond to you with a verbatim book on the encyclopedia of chairs. Insane.

I'm also incredibly excited about the possibility of this as an always available coding rubber duck. The multimodal demos they showed really drove this home, how collaboration with the model can basically be as seamless as screensharing with someone else. Incredible.



Still patiently waiting for the true magic moment where I don't have to chat with the computer, I just tell it what to do and it does it without even an 'OK'.

I don't want to chat with computers to do basic things. I only want to chat with computers when the goal is to iterate on something. If the computer is too dumb to understand the request and needs to initiate iteration, I want no part.

(See also 'The Expanse' for how sci-fi imagined this properly.)



I want it to instruct me exactly how to achieve things. While agents doing stuff for me is nice, my agency is more important and investing into myself is best. Step by step, how to make bank -- what to say, what to do.


We'll get there.

For me, this is seriously impressive, and I already use LLMs everyday - but a serious "Now we're talkin" moment would be when I'd be able to stand outside of Lowes, and talk to my glasses/earbuds "Hey, I'm in front of lowes, where do I get my air filters from?"

and it tells me if it's in stock, aisle and bay number. (If you can't tell, I am tired from fiddling with apps lol)



As goofy as I personally think this is, it's pretty cool that we're converging on something like C3P0 or Plankton's Computer with nothing more than the entire corpus of the world's information, a bunch of people labeling data, and a big pile of linear algebra.


All of physics basically reduces to linear algebra locally (though it becomes quite nonlinear when enough tensors are multiplied).

Why shouldn't we expect AI to be created using the same type of math?

If there is a surprise, it's only that we can use the same math at a much higher level of abstraction than the quantum level.



Is this a trick question? OpenAI blatantly used copyrighted works for commercial purposes without paying the IP owners, it would only be fair to have them publish the resulting code/weights/whatever without expecting compensation. (I don't want to publish it myself, of course, just transform it and sell the result as a service!)

I know this won't happen, of course, I am moreso hoping for laws to be updated to avoid similar kerfuffles in the future, as well as massive fines to act as a deterrent, but I don't dare to hope too much.



I was envisioning a future where we've done away with the notion of data ownership. In such a world the idea that we would:

> have all of OpenAI's data for free

Doesn't really fit. Perhaps OpenAI might successfully prevent us from accessing it, but it wouldn't be "theirs" and we couldn't "have" it.

I'm not sure what kind of conversations we will be having instead, but I expect they'll be more productive than worrying about ownership of something you can't touch.



So in that world you envision someone could hack into openai, then publish the weights and code. The hacker could be prosecuted for breaking into their system, but everyone else could now use the weights and code legally.

Is that understanding correct?



I think that would depend on whether OpenAI was justified in restricting access to that data in the first place. If they weren't, then maybe they get fined and the hacker gets a part of that fine.

I'm not interested in a system where there are no laws about data, I just think that modeling them after property law is a mistake.



This is alleged, and it is very likely that claimants like the New York Times accidentally prompt-injected their own material to show the violation (not understanding how LLMs really work), clouded by the hope of a big payday rather than actual justice/fairness, etc.

Anyways, the laws are mature enough for everyone to work this out in court. Maybe it comes out that they have a legitimate concern, but the way they presented their evidence so far in public has seriously been lacking.



With you 100% on that, except that after you defeat the copyright cartel, you'll have to face the final boss: OpenAI itself.

Either everybody should get the benefits of this technology, or no one should.



This is an anti-human ideology as bad as the worst of communism.

Humanity only survives as much as it preserves human dignity, let's say. We've designed society to give rewards to people who produce things of value.

These companies take that value and give nothing back to the creators.

Supporting this will lead to disaster for all but the few, and ultimately for the few themselves.

Paying for your (copyrighted) inputs is harmony.



These models literally need ALL data. The amount of work it would take just to account for all the copyrights, let alone negotiate and compensate the creators, would be infeasible.

I think it’s likely that the justice system will deem model training as fair use, provided that the models are not designed to exactly reproduce the training data as output.

I think you hit on an important point though: these models are a giant transfer of wealth from creators to consumers / users. Now anyone can acquire artist-grade art for any purpose, basically for free — that’s a huge boon for the consumer / user.

People all around the world are going to be enriched by these models. Anyone in the world will be able to have access to a tutor in their language who can teach them anything. Again, that is only possible because the models eat ALL the data.

Another important point: original artwork has been made almost completely obsolete by this technology. The deed is done, because even if you push it out 70 years, eventually all of the artwork that these models have been trained on will be public domain. So, 70 years from now (or whatever it is) the cat will be out of the bag AND free of copyright obligations, so 2-3 generations from now it will be impossible to make a living selling artwork. It’s done.

When something becomes obsolete, it’s a dead man walking. It will not survive, even if it may take a while for people to catch up. Like when the vacuum tube computer was invented, that was it for relay computers. Done. And when the transistor was invented, that was it for vacuum tube computers.

It’s just a matter of time before all of today’s data is public domain and the models just do what they do.

…but people still build relay computers for fun:

https://youtu.be/JZyFSrNyhy8?si=8MRNznoNqmAChAqr

So people will still produce artwork.



> The amount of work it would take just to account for all the copyrights, let alone negotiate and compensate the creators, would be infeasible.

Your argument is the same as Facebook saying “we can’t provide this service without invading your privacy” or another company saying “we can’t make this product without using cancerous materials”.

Tough luck, then. You don’t have the right to shit on and harm everyone else just because you’re a greedy asshole who wants all the money and is unwilling to come up with solutions to problems caused by your business model.



> So people will still produce artwork.

There's zero doubt that people will still create art. Almost no one will be paid to do it though (relative to our current situation where there are already far more unpaid artists than paid ones). We'll lose an immeasurable amount of amazing new art that "would have been" as a result, and in its place we'll get increasingly bland/derivative AI generated content.

Much of the art humans will create entirely for free in whatever spare time they can manage after their regular "for pay" work will be training data for future AI, but it will be extremely hard for humans to find as it will be drowned out by the endless stream of AI generated art that will also be the bulk of what AI finds and learns from.



AI will just be another tool that artists will use.

However the issue is that it will be much harder to make a career in the digital world from an artistic gift and personal style: one's style will not be unique for long as AI will quickly copy it and so make the original much less valuable.



AI will certainly be a tool that artists use, but non-artists will use it too so very few will ever have the need to pay an artist for their work. The only work artists are likely to get will be cleaning up AI output, and I doubt they'll find that to be very fulfilling or that it pays them well enough to make a living.

When it's harder to make a career in the digital world (where most of the art is), it's more likely that many artists will never get the opportunity to fully develop their artistic gifts and personal style at all.

If artists are lucky then maybe in a few generations with fewer new creative works being created, AI almost entirely training on AI generated art will mean that the output will only get more generic and simplistic over time. Perhaps some people will eventually pay humans again for art that's better quality and different.



Or comment on your coding in realtime with a snarky undertone.

If you give it access to the entire codebase at the same time that could work pretty well. Maybe even add an option to disable the sarcasm.



> Instinctively, I dislike a robot that pretends to be a real human being.

Is that because you're not used to it? Honestly asking.

This is probably the first time it feels natural, whereas all our previous experiences made "chat bots", "automated phone systems" and "automated assistants" feel absolutely terrible.

Naturally, we dislike it because "it's not human". But this is true of pretty much anything that approaches the "uncanny valley". But if the "it's not human" thing solves your problem 100% better/faster than the human counterpart, we tend to accept it a lot faster.

This is the first real contender. Siri was the "glimpse" and ChatGPT is probably the reality.

[EDIT]

https://vimeo.com/945587328 the Khan academy demo is nuts. The inflections are so good. It's pretty much right there in the uncanny valley because it does still feel like you're talking to a robot but it also directly interacting with it. Crazy stuff.



> Naturally, we dislike it because "it's not human".

That wasn't even my impression.

My impression was that it reminds me of the humans that I dislike.

It speaks in customer service voice. That faux friendly tone people use when they're trying to sell you something.



> It speaks in customer service voice. That faux friendly tone people use when they're trying to sell you something.

Mmmmm, while I get that, in the context of the grandparent comment, having a human wouldn't be better then? It's effectively the same, because realistically that's a pretty common voice/tone to get even in tech support.



Being the same as something bad is bad.

There are different kinds of humans.

Some of them are your friends, and they're willing to take risks for you and they take your side even when it costs them something.

Some of them are your adversaries, overtly. They do not hide it.

Some of them pretend to be your friends, even though they're not. And that's what they modeled it on. For some reason.



Apologies, I'm doing my best, but I'm quite lost.

The problem is you don't like the customer service/sales voice because they "pretend to be your friends".

Let me know if I didn't capture it.

I don't think people "pretend to be my friend" when they answer the phone to help me sort out an airline ticket problem. I do believe they're trained to, and work to, take on a "friendly" tone. Even if the motive isn't genuine, because it's trained, it's a way nicer experience than someone who's angry or even simply monotone. Trying to fix my $1200 plane ticket is stressful enough. I don't need the CSR to make it worse.



Might be cultural, but I would prefer a neutral tone. The friendly tone sets up an expectation of a good outcome to the inquiry, which makes it worse when the problem is not solvable or not within the agent's power to solve - which is often the case, since you don't call support for simple problems.

Of course I agree that "angry" is in most cases not appropriate, but still, I can see cases in which it might be; for example, if the caller is really aggressive, curses, or unreasonably blames the agent, the agent could become angry. Training people to expect that everybody will answer them "friendly" no matter their behavior does not sound good to me.



I wonder if you can ask it to change its inflections to match a personal conversation as if you're talking to a friend or a teacher or in your case... a British person?


This is where Morgan Freeman can clean up with royalty payments. Who doesn’t want Ellis Boyd Redding describing ducks and math problems in kind and patient terms?


> This is probably the first time it feels natural

Really? I found this demo painful to watch and literally felt that "cringe" feeling. I showed it to my partner and she couldn't even stand to hear more than a sentence of the conversation before walking away.

It felt both staged and still frustrating to listen to.

And, like far too much in AI right now, a demo that will likely not pan out in practice.



Emotions are a channel for conveying feelings, but our sensitivity to human emotions can also be a vector for manipulation.

Especially when you consider the bottom line that this tech will ultimately be shoehorned into advertising somehow (read: the field dedicated to manipulating you into buying shit).

This whole fucking thing bothers me.



> Emotions are a channel for conveying feelings, but our sensitivity to human emotions can also be a vector for manipulation.

When one gets to be a certain age one begins to become attuned to this tendency of others' emotions to manipulate you, so you take steps to not let that happen. You're not ignoring their emotions, but you can address the underlying issue more effectively if you're not emotionally charged. It's a useful skill that more people would benefit from learning earlier in life. Perhaps AI will accelerate that particular skill development, which would be a net benefit to society.



With AI you can do A/B testing (or multi-arm bandits, the technique doesn't matter) to get into someone's mind.

Most manipulators end up getting bored of trying again and again with the same person. That won't happen if you are dealing with a machine, as it can change names, techniques, contexts, tones, etc., until you give it what its operator wants.

Maybe you're part of the X% who will never give in to a machine. But keep in mind that most people have no critical thinking skills nor mental fortitude.
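
For readers who haven't met the term, a bare-bones epsilon-greedy bandit looks like the sketch below; the strategy names and reward function are made up purely to illustrate why an automated system never gets tired of re-trying:

    import random

    def epsilon_greedy(arms, reward_fn, rounds=1000, eps=0.1):
        # arms: list of strategy names; reward_fn(arm) -> 1 if the target "gave in".
        counts = {a: 0 for a in arms}
        values = {a: 0.0 for a in arms}
        for _ in range(rounds):
            if random.random() < eps:
                arm = random.choice(arms)                 # keep exploring
            else:
                arm = max(arms, key=lambda a: values[a])  # exploit the best so far
            reward = reward_fn(arm)
            counts[arm] += 1
            values[arm] += (reward - values[arm]) / counts[arm]  # running mean
        return values

    # Toy usage: the machine never gets bored, it just keeps re-estimating.
    arms = ["urgent tone", "friendly tone", "authority figure"]
    print(epsilon_greedy(arms, lambda a: random.random() < 0.1))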



Problem is, people aren't machines either: someone who's getting bombarded with phishing requests will begin to lose it, and will be more likely to just turn off their Wi-Fi than allow an AI to run a hundred iterations of a multi-armed-bandit approach on them.


I think we often get better at detecting the underlying emotion with which the person is communicating, seeing beyond the one they are trying to communicate in an attempt to manipulate us. For example, they say that $100 is their final price but we can sense in the wavering of their voice that they might feel really worried that they will lose the deal. I don't think this will help us pick up on those cues because there are no underlying real emotions happening, maybe even feeding us many false impressions and making us worse at gauging underlying emotions.


> When one gets to be a certain age one begins to become attuned to this tendency of others' emotions to manipulate you

This is incredibly optimistic, which I love, but my own experience with my utterly deranged elder family, made insane by TV, contradicts this. Every day they're furious about some new things fox news has decided it's time to be angry about: white people being replaced (thanks for introducing them to that, tucker!), "stolen" elections, Mexicans, Muslims, the gays, teaching kids about slavery, the trans, you name it.

I know nobody else in my life more emotionally manipulated on a day to day basis than them. I imagine I can't be alone in watching this happen to my family.



What if this technology could be applied so you can’t be manipulated? If we are already seeing people use this to simulate and train sales people to deal with tough prospects we can squint our eyes a bit and see this being used to help people identify logical fallacies and con men.


That's just being hopeful/optimistic. There are more incentives to use it for manipulation than to protect from manipulation.

That happens with a lot of tech. Social networks are used to con people more than to educate people about con men.



Yes nothing more unreasonable than not wanting your race to be replaced, wanting border laws to be enforced, and not wanting your children to be groomed into cutting off their body parts. You are definitely sane and your entire family is definitely insane.


> not wanting your race to be replaced

Great replacement and white genocide are white nationalist far-right conspiracy theories. If you believe this is happening, you are the intellectual equivalent of a flat-earther. Should we pay attention to flat-earthers? Are their opinions on astronomy, rocketry, climate, and other sciences worth anyone's time? Should we give them a platform?

> In the words of scholar Andrew Fergus Wilson, whereas the islamophobic Great Replacement theory can be distinguished from the parallel antisemitic white genocide conspiracy theory, "they share the same terms of reference and both are ideologically aligned with the so-called '14 words' of David Lane ["We must secure the existence of our people and a future for white children"]." In 2021, the Anti-Defamation League wrote that "since many white supremacists, particularly those in the United States, blame Jews for non-white immigration to the U.S.", the Great Replacement theory has been increasingly associated with antisemitism and conflated with the white genocide conspiracy theory. Scholar Kathleen Belew has argued that the Great Replacement theory "allows an opportunism in selecting enemies", but "also follows the central motivating logic, which is to protect the thing on the inside [i.e. the preservation and birth rate of the white race], regardless of the enemy on the outside."

https://en.wikipedia.org/wiki/Great_Replacement

https://en.wikipedia.org/wiki/White_genocide_conspiracy_theo...

> wanting border laws to be enforced

Border laws are enforced.

> and not wanting your children to be groomed into cutting off their body parts.

This doesn't happen. In fact, the only form of gender-affirming surgery that any doctor will perform on under-18 year olds is male gender affirming surgery on overweight boys to remove their manboobs.

> You are definitely sane and your entire family is definitely insane.

You sound brave, why don't you tell us what your username means :) You're one to stand by your values, after all, aren't you?



Well, when you ask someone why they don't want to have more children, they can shrug and say "population reduction is good for the climate", as if serving the greater good, and completely disregard any sense of "patriotic duty" to have more children that some politicians, such as Vladimir Putin, would like to instill. They can justify it just as easily as you can be deranged enough to call it a government conspiracy.


Why can't it also inspire you? If I can forgo advertising and have ChatGPT tutor my child on geometry, and they actually learn it at a fraction of the cost of a human tutor, why is that bothersome? Honest question. Why do so many people default to assuming something sinister is going on? If this technology shows real efficacy in education at scale, take my money.


Because it is obviously going to be used to manipulate people. There is absolutely 0 doubt about that (and if there is I'd love to hear your reasoning). The fact that it will be used to teach geometry is great. But how many good things does a technology need to do before the emotional manipulation becomes worth it?


AI is going to be fantastic at teaching skills to students that those students may never need, since the AI will be able to do all the work that requires such skills, and do them faster, cheaper and at a higher level of quality.


I don't think OpenAI is doing anything particularly sinister. But whatever OpenAI has today, a bad actor will have in October. This horseshit is moving rather fast. Sorry, but in two years going from failing the Turing test to being able to have a conversation with an AI agent nearly indistinguishable from a person is going to be destabilizing.

Start telling Grandma never to answer the phone.



> Especially when you consider the bottom line that this tech will ultimately be shoehorned into advertising somehow.

Tools and the weaponization of them.

This can be said of pretty much any tech tool that has the ability to touch a good portion of the population, including programming languages themselves, or CRISPR.

I agree we have to be careful of the bad, but the downsides in this case are not so dangerous that we should be trying to suppress it because the benefits can be incredible too.



The concern is that it's being locked up inside of major corporations that aren't the slightest bit trustworthy. To make this safe for the public, people need to be able to run it on their own hardware and make their own versions of it that suit their needs rather than those of a megacorp.


These sorts of comments are going to go in the annals alongside the Hacker News people complaining about Dropbox when it first came out. This is so revolutionary. If you're not agog you're just missing the obvious.


Good thing you can tell the AI to speak to you in a robotic monotone and even drop IQ if you feel the need to speak with a dumb bot. Or abstain from using the service completely. You have choices. Use them.


Until your ISP fires their entire service department in a foolish attempt to "replace" them with an overfunded chatbot-service-department-as-a-service and you have to try to jailbreak your way through it to get to a human.


I think pets often feel real emotions, or at least bodily sensations, and communicate those to humans in a very real way, whether through barking or meowing or whimpering or whatnot. So while we may care for them as we care for a human, just as we may care for a plant or a car, I think if my car started to say it felt excited for me to give it a drive, I might also feel uncomfortable.


They do, but they've evolved neoteny (baby-like cries) to do it, and some of their emotions aren't "human" even though they are really feeling them.

Silly example, but some pets like guinea pigs are almost always hungry and they're famous for learning to squeak at you whenever you open the fridge or do anything that might lead to giving them bell peppers. It's not something you'd put up with a human family member using their communication skills to do!



There’s definitely an element of evolution: domesticated animals have evolved to have human recognizable emotions. But that’s not to say they’re not “real” or even “human.” Do humans have a monopoly on joy? I think not. Watch a dog chase a ball. It clearly feels what we call joy in a very real sense.


Adult dogs tend to retain many of the characteristics that wolf puppies have, but grow out of when they become adults.

We've passively bred out many of the behaviors that lead to wolves becoming socially mature. Such dogs tend to be too dangerous to have around, since they may lead to the dogs challenging their owners (more than they already do) for dominance of the family.

AI's will probably be designed to do the same thing, so they will not feel threatening to us. But in the case of AGI/ASI, we will never know if they actually have this kind of subservience, or if they're just faking it for as long as it benefits them.



But I think this animosity is very much expected, no? Even I felt a momentary hint of "jealousy" -- if you can even call it that -- when I realized that we humans are, in a sense, not really so special anymore.

But of course this was the age-old debate with our favorite golden-eyed android; and unsurprisingly, he too received the same sort of animosity:

Bones was deeply skeptical when he first met Data: "I don't see no points on your ears, boy, but you sound like a Vulcan." And we all know how much he loved those green-blooded fools.

Likewise, Dr. Pulaski has since been criticized for her rude and dismissive attitude towards Data, which had flavors of what might even be considered "racism", or so goes the Trekverse discussion on the topic.

And let's of course not forget when he was on trial, essentially over his "humanity", or whether he was indeed just the property of Starfleet and nothing more.

More recent incarnations of Star Trek: Picard illustrated the outright ban on "synthetics" and indeed their effective banishment; non-synthetic life - from human to Romulan - simply wasn't OK with them.

Yes this is all science fiction silliness -- or adoration depending on your point of view -- but I think it very much reflects the myriad directions our real life world is going to scatter (shatter?) in the coming years ahead.



To your point, there's been a lot of talk about AI, regulation, guardrails, whatever. Now is the time to say, AI must speak such that we know it's AI and not a real human voice.

We get the upside of conversation, and avoid the downside of falling asleep at the wheel (as Ethan Mollick mentions in "Co-Intelligence".)



Exactly. I'm not sure if this is brand new or not, but this is definitely on the frontier.

I was literally just thinking about this a few days ago... that we need a multi-modal language model with speech training built-in.

As soon as this thing rolls out, we'll be talking to language models like we talk to each other. Previously it was like dictating a letter and waiting for the responding letter to be read to you. Communication is possible, but not really in the way that we do it with humans.

This is MUCH more human-like, with the ability to interrupt each other and glean context clues from the full richness of the audio.

The model's ability to sing is really fascinating. Its ability to change the sound of its voice -- its pacing, its pitch, its tonality. I don't know how they're controlling all that via GPT-4o tokens, but this is much more interesting stuff than what we had before.

I honestly don't fully understand the implications here.



> Humans are more partial to talking than writing.

Is it so?

Speaking most of the time is for short exchange of information (pleasantries to essential information exchanges).

I prefer writing for long in-depth thought exchanges (whether by emails, blogs etc.)

In many cultures - European or Asian, people are not very loquacious in everyday life.



Time and place

I'm 100% a text-everything, never-call person, but I can't live without Alexa these days; every time I'm in a hotel or on vacation I nearly ask it a question out loud.

I also hate how much Alexa sucks, so this is a big deal. I spent years weeding out what it can and can't do, so it will be nice to have one that I don't have to treat like a toddler.



I wonder how it will work in real life and not in a demo…

Besides - not sure if I want this level of immersion/fake when talking to a computer...

"Her" comes to mind pretty quickly…



> Humans are more partial to talking than writing.

Amazon, Google, and Apple have sunk literally billions of dollars into this idea only to find out that, no, we aren't.

We are with other humans, yes. When socialization is part of the conversation. When I'm talking to my local barista I'm not just ordering a coffee, I'm also maintaining a relationship with someone in my community.

But when it comes to work, writing >>> talking. Writing is clarity of ideas. Talking is cult of personality.

And when it comes to inputs/outputs, typing is more precise and more efficient.

Don't get me wrong, this is an incredibly revolutionary piece of technology, but I don't think the benefits of talking you're describing (timing, subtext, inexplicit knowledge) are achievable here either (for now), since even that requires HOURS of interaction over days/weeks/months of experiences for humans to achieve with each other.



I use voice assistants and find them quite useful, but I've had to learn the interface and memorise the correct trigger phrases. If GPT-4o works half as well in practice as it does in the demos, then it's categorically a different thing.


>> When I'm talking to my local barista I'm not just ordering a coffee, I'm also maintaining a relationship with someone in my community.

>>> But when it comes to work, writing >>> talking. Writing is clarity of ideas. Talking is cult of personality.

A lot of people think of their colleagues as part of a professional community as well, though.



Writing is only superior to conversation when weighed against discussions with more than 3 people. A quick call with one or two other people always results in more progress being made as long as everyone involved wants to get it done. Messaging back and forth takes much more time and often leads to misunderstandings.


It depends…

For example, I mentioned something to my contractor and the short thing he said back and his tone had me assume he understood.

Oh, he absolutely did not.

And, with him at least, that doesn’t happen when in writing.



I'm human and much, much more partial to typing than talking. Talking is a lot of work for me and I can't process my thinking well at all without writing.


I wouldn't call out the depression bit as a Gen Z exclusive. Millennials basically invented modern, everyday gallows humor. Arguably, they're also the ones who normalized going to therapy. Not to say that things aren't bad, just that that part didn't start with Gen Z.


>Millennials basically invented modern, every day, gallows humor

lmao what.... they absolutely didn't

This is why no one should take anyone on this site seriously about anything: confidently incorrect, easily conned into the next VC-funded marketing project.



That's the way of life.

Older people think younger people are stupid and reckless, and vice versa. And the younglings think they've "figured it out" like no one before them. But no one ever tries to understand the other in the process. Rinse and repeat.
