No doubt, people die of absolutely everything ever invented and also of not having invented some things. The best we can ever hope to do is find mitigations as and when problems arise.
If you get off the internet you'd not even realise these tools exist, though. And for the statement that all jobs will be modelled to be true, it would have to be impacting the real world.
> has there been any real cases of this?

Apparently so: https://www.businessinsider.com/jobs-lost-in-may-because-of-... Note that this article is about a year old now.

> Comparing it to the internet is insane, based off of its status as a highly advanced auto complete.

(1) I was quoting you. (2) Don't you get some cognitive dissonance dismissing it in those terms, at this point? "Fancy auto complete" was valid for half the models before InstructGPT, as that's all the early models were even trying to be… but now? The phrase doesn't fit so well when it's multimodal and can describe what it's seeing or hearing and create new images and respond with speech, all as a single unified model, any more than dismissing a bee brain as "just chemistry" or a human as "just an animal".
> Long, grammatically perfect comments that sound hollow and a bit lengthy

It's worse than I thought. They've already managed to mimic the median HN user perfectly!
I'm not sure people work like that — many of us have, as far as I can tell, for millennia and despite sometimes quite severe punishments for doing so, been massive gossips.
As soon as mimicking voices, text messages, and human faces becomes a serious problem, like this case in the UK [1], citizens will demand a solution. I don't personally know how prevalent problems like that are as of today, but given the current trajectory of A.I. models, which become smaller, cheaper, and better all the time, soon everyone on the planet will be able to mimic every voice, every face, and every handwritten signature of anyone else. When this becomes a problem, the pressure might start bottom-up, from citizens to government officials, rather than top-down, from the president to government departments. Then governments will be forced to formalize identity solutions based on cryptography. See also this case in Germany [2].

One example like that is bankruptcy law in China. China didn't have any law regarding bankruptcy until 2007. For a communist country, or rather a not totally capitalist country like China, bankruptcy is not supposed to be an important subject: when some people stop being profitable, they will keep working because they like to work and they contribute to the great nation of China. That doesn't make any sense of course, so their government was forced to implement some bankruptcy laws.

[1] https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos...

[2] https://news.ycombinator.com/item?id=39866056
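As a toy illustration of what a cryptographic identity check could look like: the sketch below uses a pre-shared secret with HMAC from the Python standard library. This is an assumption-laden simplification — a real government identity scheme would use public-key signatures and issued certificates, not a shared key — but it shows the core idea that a voice or face that merely *sounds* right proves nothing, while a valid tag proves knowledge of a secret.

```python
import hashlib
import hmac

# Pre-shared secret between you and the claimed sender (hypothetical;
# real identity systems would use asymmetric keys and certificates).
SHARED_KEY = b"agreed-upon-secret"

def sign(message: bytes, key: bytes = SHARED_KEY) -> str:
    """Produce an authentication tag for a message."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str, key: bytes = SHARED_KEY) -> bool:
    """Check the tag against the message in constant time."""
    return hmac.compare_digest(sign(message, key), tag)

# A perfectly mimicked voice saying this proves nothing;
# only the tag proves knowledge of the shared secret.
msg = b"Hi, it's really me - please call me back"
tag = sign(msg)
assert verify(msg, tag)
assert not verify(b"Hi, send money to this account", tag)
```

The point is that authentication would stop depending on how human something sounds and start depending on math that a voice clone cannot fake.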
> What defences do we have when an LLM will be able to have a completely fluent, natural-sounding conversation in someone else's voice?

The world learnt to deal with Nigerian Prince emails, and nobody falls for those anymore. Nothing changed: no new laws or regulations were needed. Phishing calls have been going on without AI for decades. You can be skeptical and call back. If you know your friends or family, you should always be able to find an alternative way to get in touch without too much effort in the modern connected world.

Just recently a gang in Spain was arrested for the "son in trouble" scam. No AI used. Most parents are not fooled by this. https://www.bbc.com/news/world-europe-68931214

AI might have some marginal impact, but it does not matter in the big picture of scams. While it is worrisome, it is not a true safety concern.
If we decide not to gamble on that outcome, what would you do differently than what is being done now? The EU already approved the AI act, so legislation-wise we're already facing the problem.
I remember Google being marginally better than AltaVista, but not much more. The cool kids in those days used MetaCrawler, which meta-searched all the search engines.
Not really. Even a human bad at reasoning can take an hour to tinker around and figure things out. GPT-4 just does not have the deep planning/reasoning ability necessary for that.
The only reason I still have a job is that it can't (yet) take full advantage of artefacts generated by humans. "Intern of all trades, senior of none", to modernise the cliché.
No, it's just the masses sniffing out the new fascinating techbro thing to make content about. In a way I'm sorry, but that's what people do nowadays. I'd prefer it to be paid for, honestly.
Search was always a byproduct of advertising. Don't blame Google for sticking to their business model. We were naive to think we could have nice things for free.
The questions I ask ChatGPT have (almost) no monetary value for Google (programming, math, etc.). The questions I still ask Google have a lot of monetary value (restaurants, clothes, movies, etc.).
> They have been generating hype for years now, yet the real-world impacts remain modest so far.

I feel like everyone who makes this claim doesn't actually have any data to back it up.
ChatGPT 3.5 has been neutered, as in it won't spit out anything that isn't overly politically correct. 4chan were hacking their way around it. Maybe that's why they decided it was "too dangerous".
AGI being "just an engineering challenge" implies that it is conceptually solved, and we need only figure out how to build it economically. It most definitely is not.
People with your sentiment said the same thing about all the cool tech that changed the world. It doesn't change the reality: a lot of professions will need to adapt or they will go extinct.
> This is no different to saying a person with a gun murdered someone rather than attributing the murder to the gun.

And "guns don't kill people, people kill people"¹ is a bad argument created by the people who benefit from the proliferation of guns, so it's very weird that you're using it as if it were a valid argument. It isn't. It's baffling anyone still has to make this point: easy access and availability of guns makes them more likely to be used. A gun which does not exist is a gun which cannot be used by a person to murder another.

It's also worth noting the exact words of the person you're responding to (emphasis mine):

> It can also murder people, and it will continue *being used* for that.

Being used. As in, they're not saying that AI kills on its own, but that it's used for it. Presumably by people. Which doesn't contradict your point.

¹ https://en.wikipedia.org/wiki/Guns_don%27t_kill_people,_peop...
The point is not the tool but how it's used. "What knives are allowed" is a moot point because a butter knife or letter opener can be used to kill someone.
These models literally need ALL data. The amount of work it would take just to account for all the copyrights, let alone negotiate and compensate the creators, would be infeasible. I think it's likely that the justice system will deem model training fair use, provided that the models are not designed to exactly reproduce the training data as output.

I think you hit on an important point though: these models are a giant transfer of wealth from creators to consumers / users. Now anyone can acquire artist-grade art for any purpose, basically for free — that's a huge boon for the consumer / user. People all around the world are going to be enriched by these models. Anyone in the world will be able to have access to a tutor in their language who can teach them anything. Again, that is only possible because the models eat ALL the data.

Another important point: original artwork has been made almost completely obsolete by this technology. The deed is done, because even if you push it out 70 years, eventually all of the artwork that these models have been trained on will be public domain. So, 70 years from now (or whatever it is) the cat will be out of the bag AND free of copyright obligations, so 2-3 generations from now it will be impossible to make a living selling artwork. It's done.

When something becomes obsolete, it's a dead man walking. It will not survive, even if it may take a while for people to catch up. Like when the vacuum tube computer was invented, that was it for relay computers. Done. And when the transistor was invented, that was it for vacuum tube computers. It's just a matter of time before all of today's data is public domain and the models just do what they do.

…but people still build relay computers for fun: https://youtu.be/JZyFSrNyhy8?si=8MRNznoNqmAChAqr

So people will still produce artwork.
> Instinctively, I dislike a robot that pretends to be a real human being.

Is that because you're not used to it? Honestly asking. This is probably the first time it feels natural, whereas all our previous experiences with "chat bots", "automated phone systems", and "automated assistants" have been absolutely terrible.

Naturally, we dislike it because "it's not human". That is true of pretty much anything that approaches the uncanny valley. But if the "it's not human" thing solves your problem 100% better/faster than the human counterpart, we tend to accept it a lot faster. This is the first real contender. Siri was the glimpse and ChatGPT is probably the reality.

[EDIT] https://vimeo.com/945587328 — the Khan Academy demo is nuts. The inflections are so good. It's pretty much right there in the uncanny valley, because it does still feel like you're talking to a robot, but you're also directly interacting with it. Crazy stuff.
I wonder if you can ask it to change its inflections to match a personal conversation as if you're talking to a friend or a teacher or in your case... a British person?
This is where Morgan Freeman can clean up with royalty payments. Who doesn’t want Ellis Boyd Redding describing ducks and math problems in kind and patient terms?
> not wanting your race to be replaced

Great replacement and white genocide are white nationalist far-right conspiracy theories. If you believe this is happening, you are the intellectual equivalent of a flat-earther. Should we pay attention to flat-earthers? Are their opinions on astronomy, rocketry, climate, and other sciences worth anyone's time? Should we give them a platform?

> In the words of scholar Andrew Fergus Wilson, whereas the islamophobic Great Replacement theory can be distinguished from the parallel antisemitic white genocide conspiracy theory, "they share the same terms of reference and both are ideologically aligned with the so-called '14 words' of David Lane ["We must secure the existence of our people and a future for white children"]." In 2021, the Anti-Defamation League wrote that "since many white supremacists, particularly those in the United States, blame Jews for non-white immigration to the U.S.", the Great Replacement theory has been increasingly associated with antisemitism and conflated with the white genocide conspiracy theory. Scholar Kathleen Belew has argued that the Great Replacement theory "allows an opportunism in selecting enemies", but "also follows the central motivating logic, which is to protect the thing on the inside [i.e. the preservation and birth rate of the white race], regardless of the enemy on the outside."

https://en.wikipedia.org/wiki/Great_Replacement

https://en.wikipedia.org/wiki/White_genocide_conspiracy_theo...

> wanting border laws to be enforced

Border laws are enforced.

> and not wanting your children to be groomed into cutting off their body parts.

This doesn't happen. In fact, the only form of gender-affirming surgery that any doctor will perform on under-18 year olds is male gender-affirming surgery on overweight boys to remove their manboobs.

> You are definitely sane and your entire family is definitely insane.

You sound brave, why don't you tell us what your username means :) You're one to stand by your values, after all, aren't you?
I wonder how it will work in real life and not in a demo… Besides, I'm not sure if I want this level of immersion/fakery when talking to a computer... "Her" comes to mind pretty quickly…
I'm human and much, much more partial to typing than talking. Talking is a lot of work for me, and I can't process my thinking well at all without writing.
But it’s not scary. It’s… marvelous, cringey, uncomfortable, awe-inspiring. What’s scary is not what AI can currently do, but what we expect from it. Can it do math yet? Can it play chess? Can it write entire apps from scratch? Can it just do my entire job for me?
We’re moving toward a world where every job will be modeled, and you’ll either be an AI owner, a model architect, an agent/hardware engineer, a technician, or just… training data.