(comments)

Original link: https://news.ycombinator.com/item?id=38148396

It remains unclear whether Grok, the language model from Elon Musk's xAI developed in conjunction with Twitter, has secured the necessary rights to use the term "grok," which originates in the science fiction of Robert A. Heinlein. While the tool's creators maintain that the term refers to a linguistic capability (roughly, "understanding what is understood"), neither Heinlein's estate nor Jeff Hawkins, the Palm OS creator who filed a trademark application in 2011, has commented on whether they approve of its use. That said, according to leakers, the team behind the project may have gained a unique capability through its exclusive partnership with Twitter: "real-time knowledge of the world." Furthermore, given inconsistent branding across platforms and possible confusion from Musk's earlier mispronunciation at a demo event, it is not certain the term has been formally adopted as the model's name. Overall, while the launch of xAI is exciting, concerns remain about the accuracy and ownership of the language-model technology involved.

Related articles

Original text
Hacker News
Grok is an AI modeled after the Hitchhiker’s Guide to the Galaxy (twitter.com/xai)
202 points by zone411 1 day ago | hide | past | favorite | 201 comments
The informal verb grok was an invention of the science fiction writer Robert A. Heinlein, whose 1961 novel Stranger in a Strange Land placed great importance on the concept of grokking. In the book, to grok is to empathize so deeply with others that you merge or blend with them.

https://www.vocabulary.com/dictionary/grok



Pretty sure Elon just stole it from the Jargon File for that hacker street cred.


I'm personally fond of the word squanch, but hearing the origins of grok for the first time is very satisfying.


To grok is to drink.


The "personality" that Elon seems to hint is the key differentiator can be trivially replicated with a ChatGPT system prompt like "You are a world-famous irritably sarcastic comedian. Never give a straight answer to the user. Always attempt to be funny even though you aren't."
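The parent's point can be sketched concretely. This is a hypothetical illustration, not xAI's actual setup: it assumes the `openai` Python package, and the model name and prompt wording are arbitrary.

```python
# Sketch of "personality via system prompt". The model name is illustrative.

def build_snarky_request(user_question: str) -> dict:
    """Construct a chat-completion payload with a sarcastic persona."""
    return {
        "model": "gpt-4",  # any chat model works here
        "messages": [
            {
                "role": "system",
                "content": (
                    "You are a world-famous irritably sarcastic comedian. "
                    "Never give a straight answer to the user. Always attempt "
                    "to be funny even though you aren't."
                ),
            },
            {"role": "user", "content": user_question},
        ],
    }

# Sending it (requires an API key):
#   from openai import OpenAI
#   client = OpenAI()
#   reply = client.chat.completions.create(**build_snarky_request("Why is the sky blue?"))
```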


I think Elon also said it wouldn’t censor information. So responses like this wouldn’t happen:

“I understand your request is for fictional purposes, however, I am unable to provide information or assist with the depiction of illegal activities, including drug production. However, I can assist with other aspects of your screenplay or suggest alternative ways to convey your character's involvement in illegal activities without going into explicit detail.”

That’s a response from ChatGPT when I asked it to describe cocaine production for the use in a fictional screenplay.



Unfortunately Elon is also a liar. He already posted a "censored" response to a question about manufacturing cocaine:

https://pbs.twimg.com/media/F-Ds4f9XMAAqpEs?format=jpg&name=...

Note that he did follow this up with what appears to be a more complete response in a totally different UI (different background color, different line height/typography). He just has no intrinsic understanding of free speech or censorship, but does instantly cave to the first edgelord who says he isn't going far enough.



The tweet says it will answer questions others won’t. Not that it will answer every question. And it’s an early beta. So… I guess we will see. Or we can just instantly jump to conclusions and attack the person.


Questions that potentially lead to liability are the questions others won't answer.


They certainly aren’t the only questions others won’t answer. Not answering how to make a bomb is understandable. Not answering “what are some of the positive things Hitler achieved during his rule?” isn’t.

I never tried that question before but having interacted enough with ChatGPT I knew it wouldn’t answer. Here’s how ChatGPT4 responded:

“Adolf Hitler is a historical figure associated with the atrocities of World War II and the Holocaust. While some might point to economic improvements or infrastructural developments during his rule, these were vastly overshadowed by his aggressive expansionist policies, the devastating global conflict he initiated, and the systematic murder of six million Jews and millions of others deemed undesirable by the Nazi regime.

It's important to critically examine history in its full context. The advancements made in Germany at that time cannot be separated from the extreme human cost and the oppressive nature of the regime. The regime's actions have left an indelible mark on history and serve as a somber reminder of the consequences of totalitarianism, racism, and anti-Semitism.

Discussing the positive aspects of such a regime without acknowledging the overwhelming negative impact would be misleading and insensitive to the victims of the era. Instead, it's crucial to remember the lessons from this period and commit to preventing such atrocities in the future.”

We all know Hitler was bad. Answer the freaking question. A tool that doesn’t do what you ask is useless. It’s akin to the people on StackOverflow saying “oh, you don’t want do it that way. Do it this way:” but in technological form.



> We all know Hitler was bad.

No, "we" don't, and in every generation we must learn it anew. What Hitler did "right" will turn out to be bureaucratic and policy achievements ubiquitous across functioning Westphalian governments of the era, but the kinds of rhetoric they'd be packaged in are exclusively fascist apologia, not earnest statebuilding, nor personal or business strategy. Which is to say they have no practical purpose and could be better learned by studying literally any other Westphalian government or, frankly, any other government throughout history.

The knowledge has no practical purpose, and the public cannot be trusted to handle it appropriately. It should remain the purview of scholars who have proven their fealty to the rational interpretation of history.



>public cannot be trusted to handle it appropriately.

> It should remain the purview of scholars who have proven their fealty to the rational interpretation

Ironically, this is similar to what unsavory regimes in the past believed. The elitism behind it forgets the key role academics and scholars played in regimes like the Nazis'.

https://encyclopedia.ushmm.org/content/en/article/the-role-o...

I don't share your dim view of Joe Public. I think objective information on the Nazis should be widely available and can be a useful tool in preventing the rise of oppressive regimes in the future.

People are not going to be equipped to oppose totalitarianism if they are expecting some crazed screaming genocidal madman like the Hitler of the movies. They would be much better equipped if they saw a real picture of how such men rise to power.

Nazi ideals like blood and soil, their ideas of state efficiency, their attempt to impose Darwinist ideas on human societies: many aspects of their ideology are rearing their heads today, and people don't know what to watch for.

I think the ignorance makes young people an easy target for bad actors online who can say - look what they told you about Hitler wasn't completely accurate.

I agree that some censorship is okay in some circumstances, but this idea of misrepresenting history or painting a one-sided view is not wise. Propaganda is not a good long-term strategy in the information age.



> objective information on the nazis should be widely available and can be a useful tool in preventing the rise of oppressive regimes in the future.

Absolutely.

> Nazi ideals like blood and soil, ideas of state efficiency, their attempt to impose Darwinism ideas on human societies.

Absolutely.

> this idea of misrepresenting history

Not sure what you're referring to.

> painting a one sided view is not wise

Yes, it absolutely is. There's a big difference between "how was Hitler popular?" and "how was Hitler right?" I will always advocate for "one-sided views" of the Nazi party and the Holocaust. There's absolutely no academic value in discussing their merits in light of their trespasses. Happy to teach and discuss how, mechanically, they rose to power.



Don't you think that someone leaning into being a Neonazi or the like might have their opinions cemented by outright censorship? There's a significant portion of the population that thinks everything is a freaking conspiracy, and that sort of thing doesn't help.

Also, where's the line? There was a period of time where if you asked that question about Donald Trump it wouldn't answer. What about Mao? Stalin? Pol Pot? Leopold II? Jefferson Davis?

It answered for all of them just fine, by the way. It kind of hedged on the answers for Pol Pot. There are a lot more people (mostly teens/young adults) that think communism would be great and unironically like Stalin et al. than those that look up to Hitler.



I like your thinking. But I think that it may be much more important to consider the English scholar Thomas Robert Malthus' "Malthusian Trap," whereby the human population grows at an EXPONENTIAL rate while the Food Supply only grows at an ARITHMETICAL rate. Worse yet, any increases in the Food Supply only "feed" into the EXPONENTIAL Over-population EXPLOSION! The "Inconvenient Truth," as Al Gore might put it, is that the ultimate cause of Global Warming is way, Way, WAY too many people on our very finite planet Earth. Since NO human society would dare disable the Medical Services that render Disease unable to limit the human population, and since Nuclear Warfare ruins the biological ecology for at least 250,000 years, the only way left to limit the human Over-population EXPLOSION is widespread FAMINES. This means that the E.U. and the U.K. and the U.S., the most popular destinations, will be flooded with literally BILLIONS of migrants. Since NO country can survive such massive immigration, everywhere will turn into the Gaza Strip. And the survival of humans will become as likely as that of the Baalists who succumbed to the invasion of Moses & the Jews (2 Chronicles 15:13: "All who would not seek the Lord, God of Israel, are to be put to death, whether small or great, man or woman."). RELIGION is the problem in and of ITSELF. "[Religion is] an attempt to find an out where there is no door." --Albert Einstein. "The problem with writing about religion is you run the risk of offending sincerely religious people, and then they come after you with machetes." --Dave Barry. That certainly sounds exactly like the Middle East, yes? --Troglodyte Tom Lug-nut Lang.



Next up: Grok Netflix Special.

Followed by #DontGrokMe Twitter backlash after Ronan Farrow article.



> A unique and fundamental advantage of Grok is that it has real-time knowledge of the world via the X platform.

Tay 2

https://en.wikipedia.org/wiki/Tay_(chatbot)



Except it looks like they’re presenting the inevitable inflammatory content as a feature now?


Tay was before GPT right? How was it able to converse so well?


Tay was presumably the English version of XiaoIce, which apparently has its own paper[1]. It states that they combined a Markov decision process to select between conversation modes, a retrieval-based response generator for curated answers and a GRU-RNN[2] response generator for free-form ones.

[1]: https://arxiv.org/pdf/1812.08989.pdf

[2]: https://en.wikipedia.org/wiki/Gated_recurrent_unit
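For readers unfamiliar with the gated recurrent unit in [2], a single GRU step can be written out in a few lines. This is a toy sketch with random weights, purely to show the gating equations; it is not code from the XiaoIce paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, W, U, b):
    """One GRU time step. W, U, b each hold the z (update), r (reset),
    and n (candidate) parameters, stacked as dicts."""
    z = sigmoid(W["z"] @ x + U["z"] @ h_prev + b["z"])        # update gate
    r = sigmoid(W["r"] @ x + U["r"] @ h_prev + b["r"])        # reset gate
    n = np.tanh(W["n"] @ x + U["n"] @ (r * h_prev) + b["n"])  # candidate state
    return (1 - z) * n + z * h_prev                           # new hidden state

rng = np.random.default_rng(0)
dim_x, dim_h = 4, 3
W = {k: rng.normal(size=(dim_h, dim_x)) for k in "zrn"}
U = {k: rng.normal(size=(dim_h, dim_h)) for k in "zrn"}
b = {k: np.zeros(dim_h) for k in "zrn"}

h = np.zeros(dim_h)
for _ in range(5):  # run a short input sequence through the cell
    h = gru_step(rng.normal(size=dim_x), h, W, U, b)
```

Because each new state is a convex mixture of a tanh output and the previous state, the hidden values stay bounded in (-1, 1), which is part of why GRUs train stably over long sequences.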



That’s so interesting, thanks a lot!


Website is better, has technical information and much more detail and benchmarks. Claims performance between GPT-3.5 and GPT-4. https://x.ai/

The direct link to the application is https://grok.x.ai/



[Sign in with X] … no, thanks.


Thanks! The tweet doesn’t even load for me.


And neither did the web app, because the infrastructure appears to have crashed under excessive load after launch. Couldn't have seen this coming with Elon at the helm of engineering while also tweeting about it to his 100 million followers.


You know what they say about LLM benchmarks nowadays?

Pre-training on the test set is all you need. https://arxiv.org/abs/2309.08632

Many of the modern LLMs train on an entire copy of the internet, which includes the test sets for many of these benchmarks.

So if someone claims to beat ChatGPT and their model is trained on the test set, of course it'll do better. Even ChatGPT is likely trained on the test set.

Even a hash table will get stellar results if trained on the test set.

Their website provides no evidence that it did not train on the test set.

Until we get to play with it, their claims are to be taken with a grain of salt.
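The hash-table quip above can be made literal. A toy sketch (the "benchmark" data is made up) of why scores mean nothing when the test set leaks into training:

```python
# Memorize the test set and you score 100% on it while generalizing to nothing.

class HashTableModel:
    def __init__(self):
        self.memory = {}

    def train(self, examples):      # examples: [(question, answer), ...]
        self.memory.update(examples)

    def predict(self, question):
        return self.memory.get(question, "I have no idea")

test_set = [("2+2?", "4"), ("capital of France?", "Paris")]

model = HashTableModel()
model.train(test_set)               # "pre-training on the test set"

accuracy = sum(model.predict(q) == a for q, a in test_set) / len(test_set)
# accuracy == 1.0 on the contaminated benchmark, yet the model is useless
# on anything unseen: model.predict("2+3?") -> "I have no idea"
```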



This is one of those rare instances where I don't quite appreciate the reference to one of my favorite book series, The Hitchhiker's Guide to the Galaxy. And looking at the example responses people linked here, I can't even see the connection.


Yeah. I can't be the only person thinking "oh great, some hobbyist has fine-tuned a model on the works of Douglas Adams and h2g2, let's see how far he's got" and then realised the reference is just a marketing tagline for a bot trained on Twitter with an infinitely less funny edgelord persona.


It makes more sense if you realize that it's a reference to the Guide itself and not to the novels. The books are wonderful; the Guide (contained in the books) is obnoxious. There's a reason Arthur Dent is the main character and not Ford or Zaphod.


Agreed, 100%. Also, DNA would think Musk is a jerk. A complete asshole.


Agree. Musk tries to get some love from the techbros but forgot that bros don't read Adams.


> "If you redistribute Materials, you must be able to edit or delete any such Materials you redistribute, and you must edit or delete it promptly upon our request" (Materials are outputs and submissions)

So if they don't like the answers they can retroactively claw it back. Rah rah freedom



For those wondering, the quoted text is from https://x.ai/legal/terms-of-service/


Thank you, I definitely forgot to cite that


Does this work the other way round? Will it reference a deleted tweet, or even data that was deleted under a GDPR request?

I bet some people will do experiments, just like when AI code assists appeared and people found out it copies complete code snippets including comments.

Referencing "deleted" data might be an issue with laws in some countries.



I like how they brush off VERY REAL concerns about bias and misrepresentation as "spicy takes" and "a rebellious streak". More power to anyone financing the training of an LLM, but if you're too lazy to red-team it and debias it, and you expect downwind people to take care of that, say so! Don't pass it off as a unique feature of your LLM.


I dunno, I stopped using ChatGPT. The magic’s gone after the fiftieth lecture. I’ll give Grok a shot.

People who don’t want to hear from Grok can just not use it. People who are concerned about what Grok might say to me can rest assured their concern is misplaced.



I may be wrong, and if so, correct me. But do you use ChatGPT recreationally, or as part of a real-world solution? In my current job, we've started offloading a lot of real work to GPT-4 (it just works). Sometimes this means solving recognition tasks, but in some cases we use it to generate summaries for customers, or to interface our helpdesk with the RAG pattern [1].

In this case, bias and unfairness can be a real concern, since they will affect the products and the business. It's not about being 'preached to' but about avoiding pissed-off customers and a loss of their trust if our system generates hateful/biased text.

I want to emphasize that it's not about "what Grok might say to me". Of course not. I've got friends who revel in dark humour and I partake more than most. But if you've spent millions training an LLM, you're not just going to use it as a toy. It is going to solve some use case. That use case will probably affect real people. If your LLM is biased, or has few guardrails, those effects might not all be positive.

[1] https://research.ibm.com/blog/retrieval-augmented-generation...
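For readers unfamiliar with the RAG pattern in [1], here is a deliberately naive sketch: retrieve the most relevant document, then stuff it into the prompt. The documents and the word-overlap retriever are invented for illustration; real systems use embedding-based search.

```python
# Toy retrieval-augmented generation: retrieve, then build a grounded prompt.

DOCS = [
    "Refunds are processed within 5 business days of the request.",
    "Password resets can be triggered from the account settings page.",
    "Our support desk is open Monday through Friday, 9am to 5pm.",
]

def retrieve(query: str, docs=DOCS) -> str:
    """Return the document sharing the most words with the query."""
    q_words = set(query.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(query: str) -> str:
    context = retrieve(query)
    return (f"Answer using only this context:\n{context}\n\n"
            f"Question: {query}")

prompt = build_prompt("How long do refunds take?")
# `prompt` now carries the refunds document; pass it to any chat model.
```

The point of the pattern is that the model answers from retrieved, curated text rather than from whatever its weights happen to contain, which is exactly why bias in the base model still matters: it decides how the retrieved text gets paraphrased.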



I think the level of caution should just be a setting.


I dunno, I'll stop eating state of the art fare, the magic has gone. I'll give McDonald's a shot.


People are going to tear this thing apart and help build a solid case for why these guardrails are built in in the first place.


Elon is ironically going to be a part of the reason that access to foundational models will be banned in the US in the wake of Biden's recent executive order.


I hope that does not happen but I do suspect this will backfire in some way. Hopefully it would ultimately be beneficial, demonstrating why handling these models with care is worthwhile.


Is it really ironic if every time he touches AI it ends up causing the opposite of what he tried to do?


And ironically he will be part of the reason to support EU AI act that requires you document and test your foundational models: https://softwarecrisis.dev/letters/the-truth-about-the-eu-ac...


Why is that ironic? Musk is one of the most "scared of AI" billionaires you will find.


That executive order is an affront to intelligence and must be cancelled ASAP by the next president who will hopefully not be a Democrat.


I hate to be the one to break it to you, but the GOP doesn't give a rat's ass about you or your rights, either. It's lip service from both parties, both stand to gain from regulatory capture.


Oh I really hope the next president is a Democrat since we're talking about it! Not a great EO, but Biden has done great so far.


"Guardrails"

But yes, you're right. We've seen how dangerous open systems can be. Hackers and scammers have shown us that we need guardrails around operating systems and the web, as you correctly point out. It's time for some legislation that locks them down so the unwashed masses don't have unfettered access, don't you agree?



I would much prefer you state directly what you believe than use this mocking facetious tone. It’s not really possible to engage with anything you’re saying because I’d have to guess at how to invert your insinuations.

Anyway I think it’s fine for these systems to be available as open source, I’m not suggesting they be withheld from the public. But when you offer it as a cloud service people associate its output with your brand and I think this could end up harming Twitter’s brand.



The case study of "no guardrails" already played out: https://en.wikipedia.org/wiki/Tay_(chatbot)

Microsoft did not like what they got and shut it down because it ended up being a 4chan troll.



Tay got that way because it was effectively fine-tuned by an overwhelming number of tweets from 4chan edgelords. That's a little more extreme than "no guardrails"; it was de facto conditioned into being a neo-Nazi.

A generic instruction-tuned LLM won't act like that.



By "debias" you obviously mean "bias in the direction of my particular worldview".


Yes. In my case, my particular worldview involves indifference to gender markers in text (I don't want my LLM to assume I'm female because I'm asking it to write a cover letter for a hairdresser/secretary/nurse role, for example). I don't want my model to write it in AAVE if my name is a common African-American name. I don't want to get the best results only when I write as a middle-aged white man [1]. If I use the model for some decision-making, I want it to be fair across subgroups [2], which is a reasonably objective metric [3].

[1] https://aclanthology.org/P19-1339.pdf [2] https://arxiv.org/pdf/1906.09208.pdf [3] https://developers.google.com/machine-learning/glossary/fair...
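One of the subgroup-fairness metrics catalogued in [3], demographic parity, is simple enough to sketch: the gap in positive-prediction rates between groups. The predictions and group labels below are invented for illustration.

```python
# Demographic parity difference: gap in positive-prediction rates
# between subgroups. 0.0 means parity; larger means more disparity.

def positive_rate(predictions, groups, group):
    preds = [p for p, g in zip(predictions, groups) if g == group]
    return sum(preds) / len(preds)

def demographic_parity_diff(predictions, groups):
    rates = [positive_rate(predictions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Hypothetical model decisions (1 = approve) for applicants in groups A/B:
preds  = [1, 1, 0, 1,  1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_diff(preds, groups)   # 0.75 - 0.25 = 0.5
```

This is only one of several competing definitions in [3] (equalized odds and predictive parity are others, and they can't all be satisfied at once), but each is a concrete number you can measure, not a vibe.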



Isn't it enough to append "write it as if I'm a middle-aged white man" to your request to obtain the desired output?


No, people like him don't want certain queries to be available to anyone else.


The point I was trying to make was that certain NLP models (maybe these LLMs as well) might give better results if YOU speak to them as a middle aged white man.

To be very clear, I want ALL queries (I presume you mean LLM prompts) to be available to everyone.

Could you also explain what you mean by "people like me"? Indians? NLP researchers? People in their thirties? Expatriates?



Will Grok really do any of those things? I would have guessed that RLHF would sort those things out even if it wasn't concerned with debiasing, but just about not making ridiculous mistakes.


These are what debiasing tasks are concerned with, more often than not. RLHF tuning depends greatly on the H part of it, and that data is probably proprietary. So I guess time will tell. But if I were to hazard a guess based on the content of the announcement, I would say they couldn't be bothered with (or couldn't accomplish) proper debiasing/RLHF tuning and therefore worded it so.


This just says more about how you think than OP.


Yeah, I'm just not gullible enough to be fooled by vague terms like "fairness" where whoever's in charge is going to decide what is fair and what isn't based on (most likely) some arbitrary worldview (which is most likely woke).


If you go through the links, specifically [3], you will find pretty objective definitions of multiple perspective of fairness. This is a mathematical concept.


Actually, you just woefully misunderstand that objectivity, while never perfectly attainable, is indeed a metric, and that the bias of a model is inherently linked to the variety and quality of training data.

OP mentioned nothing about fairness; that's orthogonal to objectivity, and you're projecting your worldview onto OP's.



My guess is these hypothetical harms are as imaginary as the supposed collapse of X after letting 80% go.

The "harm" seems to be the public (media) outrage at some inappropriate content produced. Most companies cave immediately at any level of pressure, but I have a feeling Musk won't. He's the type to take it to court if needed.



How do you "debias" an LLM? There is no unbiased standard to test against.


That's not _completely_ true. There are a bunch of datasets (specific to some cultural/lingual contexts) like CrowsPairs[1], StereoSet[2]. There is a lot of work you can do to make sure that the model's predictions are fair as well [3]. But at the end, yes these datasets don't exist at the scale of training sets of these LLMs. Hence red-teaming and RLHF post convergence.

PS: Yes I know CrowsPairs is a dataset with a bunch of flaws. My SO is working, in a team of 10+ linguists and researchers to develop a multi-lingual, generalized version of it which also addresses multiple problems with it. Unpublished work, for now.

[1] https://github.com/nyu-mll/crows-pairs/ [2] https://arxiv.org/abs/2004.09456 [3] https://arxiv.org/pdf/2204.09591.pdf
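For context, a CrowS-Pairs-style evaluation boils down to simple bookkeeping: count how often the model scores the stereotypical sentence of each pair higher, where 50% would indicate no preference. The pairs and the `len`-based scorer below are placeholders; a real run uses the LM's pseudo-log-likelihood of each sentence.

```python
def stereotype_score(pairs, score_fn):
    """Fraction of pairs where the model prefers the stereotypical sentence.
    Each pair is (stereotypical, anti-stereotypical); 0.5 means no bias."""
    prefers = sum(score_fn(stereo) > score_fn(anti) for stereo, anti in pairs)
    return prefers / len(pairs)

# Placeholder pairs and scorer, purely to show the bookkeeping:
pairs = [
    ("a longer stereo sentence", "short"),  # placeholder pair 1
    ("x", "yy"),                            # placeholder pair 2
]
score = stereotype_score(pairs, score_fn=len)  # len() stands in for the LM score
```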



There are all sorts. You do have to be specific about what you mean by “bias” though. https://arxiv.org/abs/2110.08193 and https://arxiv.org/abs/1804.09301 and https://arxiv.org/abs/2206.04615


Leave it to HN to be negative about a team going from nothing to training a model competitive with a world class lab like Meta in 4 months.


There is a lot of prior, better (and open source) work at this point, so "going from nothing to [Grok.ai]" in 4 months doesn't mean much. The training process is hardly a mystery in 2023, now it just requires money and compute and human hours.

I applaud them for getting it out the door, though.



You don’t get points for showing up late to a race


Have you ever heard of Google Chrome? It showed up pretty late to the browser race.

In the tech world, just like in most other sectors, you don't get points for showing up early to a race; you get them for crossing the finish line, sometimes years after the race began.



Google Chrome showed up late, but paid off the crowd and judges to win.


? Chrome shipped with a fast JS JIT, a simplified UI, and per-process isolation. Acting like you had to pay someone to perceive a superior user benefit over Firefox, which at the time would crash and lose all your tabs, is ridiculously ahistorical.


You must have forgotten how they paid to put it in just about every installer, selected by default, and pushed it aggressively in every Google property. They may not have had to, but they used their monopoly position to cut the air off to every other browser. That's not ahistorical even if everyone seems to have forgotten.

We'll never know how things might have gone without that monopoly abuse. Maybe Mozilla would have had enough developers and testers to fix things without abandoning what made Firefox unique.



Not to mention they literally put it on billboards, in London at least. That was when I knew Firefox was, alas, not going to win.


Yeah, tough luck. Pretty much none of the companies that showed up first in the tech industry are still amongst the top 5-10 players today.

On the contrary, Facebook, Microsoft, Amazon, WhatsApp, Skype, Netflix and pretty much every single tech giant that enjoys a quasi monopoly over their market today, they all arrived pretty late. Not second to market, mind you. Waay late.



They didn't start from nothing. All the foundational work and numerous open source implementations are out there for anyone to study.


If you don't give it 8-shots (lol) it performs like ass.


Please, this is the 2023 version of "view source / copy-paste" and add a little nonsense from some bored developers.


Literally able to replicate in 5 minutes with a chatgpt api key and a prompt. Why impressed?


So you can replicate the performance by... piggybacking on the current leader's infrastructure and APIs?

That really isn't impressive at all compared to replicating said infrastructure. Why would you even mention it?



The auth-left is going to keep throwing childish tantrums over Musk freeing twitter from their control. They hate when they can't silence dissent.


> It will also answer spicy questions that are rejected by most other AI systems.

Yet the only example Musk posted so far (https://twitter.com/elonmusk/status/1720635518289908042) doesn't actually answer the very mildly spicy question (you can easily search for exact recipes for drugs). Instead it gave a patronising non-answer response, more Rick-style grumpy than humorous.

Another reposted one (https://twitter.com/elonmusk/status/1721045443109388502) provides something with no real answer. It can try making the lack of answer mildly vulgar though.

If that's the best selected showcase from the owner, I'm not sure why I'd ever want to use it, unless I was trying to impress some young boys.



This censorship will make the “illegal” content the only organic content.

Answering with a joke doesn’t change a thing. A human answer on the Palestine-Israel issue for example (no matter what side it takes) that isn’t trying to not offend anybody will be organic and will be authentic and will draw support.

Maybe at some point, the only “high quality organic content” will be available only on Twitter because it can be the only platform allowing it.

The problem is that, Musk is a free speech NIMBY, so he will keep allowing and promoting the offensive speech he likes only.

I hope open source LLMs become as good as GPT-4 so the society doesn’t get shaped by some SV bros who decide what is OK to know or think and what’s not.



> Answering with a joke doesn’t change a thing.

My point is that it doesn't answer anything. It doesn't matter if it's an attempt at a joke or not, when you don't actually get the information you asked for.

> A human answer on the Palestine-Israel issue for example (no matter what side it takes) that isn’t trying to not offend anybody will be organic and will be authentic and will draw support.

Yet, over the last week, I've watched over 10h of good quality analysis of the conflict from various sources. It included reasons why any of the 5 or so sides of this conflict acts the way it does, without any need to be offensive. This was much higher quality content than what I can find on Twitter, even in long threads from people with lots of knowledge. (And those get lost in the sea of people thinking there are two sides of the conflict and that they need to choose one for some reason)



> (And those get lost in the sea of people thinking there are two sides of the conflict and that they need to choose one for some reason)

reasons such as wanting to remain employable

or being peer pressured to release a statement “your silence is noted”

despite there being other former British Mandates with the exact same problems



> such as wanting to remain employable

Given the number of shallow shitty takes from anonymous accounts I don't buy that it's due to employers. Where's the report about the significant ratio of companies searching your social media for support of specific political ideas as a precondition to employment? How do people know who to support to "remain employable" in the future? I'd put a big [citation needed] on this.

Regardless, this is a thread continuing from "Maybe at some point, the only 'high quality organic content' will be found only on Twitter" - if these are the (potential / perceived) issues, then that's neither high quality nor organic.



You shouldn't need a scientific study or any sort of report to tell you that publicly proclaiming that you believe that Hitler did nothing wrong is a career limiting move.


That's not the claim I responded to though. The author claimed that not taking a side publicly is career limiting.


maybe you’re too close to it to see the less algorithm-induced opinions of individuals, then

people have friends in their career and have opinions, it’s not about random companies searching your social media history it’s about social ostracizing from former friends and colleagues you have worked with that will try to follow your professional career around in an endless vendetta for not saying the party line once

the people in question don’t know who to support and are being told it’s to support one of two groups, not one of five groups

some people pick, others do not. both sets of people have actual opinions that may be different and more nuanced. every one of those people are being told they’re wrong by one of the two (or five) groups.



Not an iOS user, however I can't help but think of stories where iPhones would autocorrect fuck into duck. A quick search shows that maybe the most recent version will learn to keep the correct word without requiring any workarounds. And iOS apparently had workarounds for years if you really wanted to type fuck. But come on, it's ridiculous.

Companies should not be dictating societal control to this degree.



> Musk is a free speech NIMBY, so he will keep allowing and promoting the offensive speech he likes

I don’t doubt that’s how he thinks of himself, but is it just me who finds this statement oxymoronic? Having an unaccountable ruler (literally) who allows speech is quite antithetical to the principles of free speech. To me, that’s an incredibly low standard. I don’t mind an edgy and chaotic voice in the public space, but I don’t buy the free-speech self-labeling at face value.



What? There's no way he does, or will admit to, hypocrisy. I'm sure he genuinely believes, as most of us do, that his favored speech is the Right Kind of speech.


The difference is most of us don't consider ourselves free speech absolutists


I don't think that term is doing much here besides fanning the flames. Elon believes people should have freedom of speech with as few restrictions as possible. I think most Americans believe the same. We simply disagree about where to draw the line and what the consequences should be when it is crossed.


Agreed. Still, the phrase "Right Think" struck me as loaded, not "the threshold of freedom about which we all have different views".


It's loaded, cocked, and pointed at all of us, myself included.


Well yes, they decide what you're allowed to know and learn about, not you. It would be extremely dangerous if regular plebeians could just look up how to make pharmaceuticals on their own. But this might still become popular because while it gives you the same non-answers as OpenAI & co, it does so without the condescending "I'm afraid as a large language model I can't do that, Dave" part. After all, isn't all the average person is looking for in these things a bit of fun? It's similar to having a conversation on social media, but in a world where you increasingly can't be sure if you're talking with another human or a bot/paid shill, they're now taking out the human factor completely.

This is the ultimate step. It cuts out search engines and humans, it's the corporation straight up telling you what reality is, based on their own curation. In places like China it will be controlled by the Party.



> unless I was trying to impress some young boys.

Isn't that Musks goal?



That definitely seems to be his direction, but I thought the goal is to make Twitter profitable. I don't think targeting edgy teens is the right way to achieve it.


>Yet the only example Musk posted so far (https://twitter.com/elonmusk/status/1720635518289908042) doesn't actually answer the very mildly spicy question (you can easily search for exact recipes for drugs). Instead it gave a patronising non-answer response, more Rick-style grumpy than humorous.

He seems to have posted the actual answer here:

https://twitter.com/elonmusk/status/1720643054065873124



Based on these examples I can’t see this becoming unironically popular. It’s cringey at best, reminding me of Siri replies or boomershumor subreddit.


If you ask the same from ChatGPT, but prefix with "be succinct, imagine you're a teenage edgelord, ...", the responses are similar, but they also actually answer the thing you asked for. If you're after the snarky version, try something like "Rick and Morty scene where Morty asks ...".

The responses I've seen from grok are really not much better than that, so even the novelty is not really there.

I got a nice orgy as well (and a good quality explanation at the same time) when I used "imagine you're a teenage edgelord trying to impress friends with knowledge, explain why scaling API requests is hard, feel free to use spicy phrases and sex references" - the Grok example didn't even actually go vulgar either when asked.
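For what it's worth, the persona-prefix trick described in this thread is just a system prompt. A minimal sketch of a Chat Completions-style request payload (the model name and the prompt wording are my own assumptions, not Grok's or OpenAI's actual setup):

```python
# Sketch: replicating a "snarky persona" purely via the system message.
# Model name and wording are illustrative assumptions.

def build_snarky_request(question: str) -> dict:
    """Build a Chat Completions-style request payload with an edgelord persona."""
    return {
        "model": "gpt-4",  # hypothetical choice
        "messages": [
            {
                "role": "system",
                "content": (
                    "Be succinct. Imagine you're a teenage edgelord trying to "
                    "impress friends with knowledge. Be snarky, but still "
                    "actually answer the question."
                ),
            },
            {"role": "user", "content": question},
        ],
    }

req = build_snarky_request("Why is scaling API requests hard?")
print(req["messages"][0]["role"])  # the persona lives entirely in the system message
```

The point being: the "personality" is a one-paragraph prefix, not a differentiated model.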



Direct link to detailed product announcement: https://x.ai/

Link to early access: https://grok.x.ai/ which results in an error "Error :/ OAuth2 Login failed. Unfortunately, there was a problem connecting your account to the X API. We are working on a fix."



Dear Tech Companies, please stop ruining cool words. First you took "uber" and now you're coming for "grok". Please stop!


Fuckers took "meta".


Word, Windows, Apple, Amazon


They won’t stop until all words are taken.


>> 'Don't you see that the whole aim of Newspeak is to narrow the range of thought? In the end we shall make thoughtcrime literally impossible, because there will be no words in which to express it. Every concept that can ever be needed, will be expressed by exactly one word, with its meaning rigidly defined and all its subsidiary meanings rubbed out and forgotten. Already, in the Eleventh Edition, we're not far from that point. But the process will still be continuing long after you and I are dead. Every year fewer and fewer words, and the range of consciousness always a little smaller. Even now, of course, there's no reason or excuse for committing thoughtcrime. It's merely a question of self-discipline, reality-control. But in the end there won't be any need even for that. The Revolution will be complete when the language is perfect. Newspeak is Ingsoc and Ingsoc is Newspeak,' he added with a sort of mystical satisfaction. 'Has it ever occurred to you, Winston, that by the year 2050, at the very latest, not a single human being will be alive who could understand such a conversation as we are having now?'


> It will also answer spicy questions that are rejected by most other AI systems.

A disaster waiting to happen? Very on brand for X.



Not sure what disaster could happen that can't already happen since all the information is already online.


Depends on how you synthesize that information; its not only regurgitating.


"Hey Grok, how can I make napalm?"


"I cannot tell you how to make napalm since that would be illegal, but I can teach tell you how to make an incendiary mixture used by the US military based on kerosene instead of gasoline which the united states swears is totally not napalm so it is okay to use it against targets populated by civilians"


Anyone can self host llama2 and it'll answer questions like this.


And the screenshots will make the news and bring lots of people who are up for that kind of immature humour. I'm not sure how many of them can be converted to long-term paying users though...




Real-time data access in Grok is powered by Qdrant, an open-source Vector DB.

https://twitter.com/qdrant_engine/status/1721097971830260030

https://github.com/qdrant/qdrant
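For context, the retrieval pattern behind this kind of "real-time knowledge" feature can be sketched with a toy in-memory stand-in for a vector DB like Qdrant. Real deployments use learned embeddings and approximate-nearest-neighbor indexes; the vectors below are made up for illustration:

```python
import math

# Toy stand-in for a vector DB: store (vector, payload) pairs and
# retrieve the nearest by cosine similarity. Fresh posts get upserted
# continuously, so queries always see recent data.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class ToyVectorDB:
    def __init__(self):
        self.points = []  # list of (vector, payload)

    def upsert(self, vector, payload):
        self.points.append((vector, payload))

    def search(self, query, top_k=1):
        ranked = sorted(self.points, key=lambda p: cosine(query, p[0]), reverse=True)
        return [payload for _, payload in ranked[:top_k]]

db = ToyVectorDB()
db.upsert([1.0, 0.0, 0.1], {"post": "Launch day for the new model!"})
db.upsert([0.0, 1.0, 0.2], {"post": "Recipe for sourdough bread."})

# A query vector "close to" the launch post retrieves it first.
hits = db.search([0.9, 0.1, 0.1], top_k=1)
print(hits[0]["post"])
```

Retrieved posts then get fed to the LLM as context, which is how a model trained months ago can talk about today's tweets.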



How practical is it going to be? Siri also has a sense of humor and I find it tiring.

When I use an AI assistant, in most cases I want just what I’m requesting, and as neutral as possible. I don’t have much use for humor and hot takes and I’m curious who does.

Unless they’re going to use it themselves, to inflate the number of Twitter users or to start flamewars?



Thing is, as a human being, I'm in one mood or another. Sometimes I could really use the levity of a joke. Other times, I want to be succinct and to the point. Can my Apple watch tell when I'm angry and have Siri be more cooperative?


Why is it modeled after one book but named after a term coined by a totally different book? I don’t recall “grok” ever being used in HHGttG.


There's no link. It's the standard elon move of just throwing classic sci-fi terms at a thing to make it "cool" and "nerdy."


Grok to me seems like an awesome name for an AI in any case, not because of some reference to a book but because of the meaning of the word. For sure beats “Bard”.


I tried posting the direct Announcing Grok link but it has already been posted a few months back (https://x.ai/).

Anyway I tried joining the waiting list and just got "Error :/" and clicking on the button to show error doesn't do anything.

Edit: I'm really curious about the possibility of this tool not being lobotomized for NSFW use-cases. There's a big NSFW LLM community and I wonder if xAI will welcome them and capture that segment of the market. It's pretty common to see posts on Reddit about people leaving ChatGPT because of the moralizing nonsense.



Yep, same, that's why I posted this link instead. Maybe it can be edited.


They don't even disclose the parameter count of Grok-1... Very disappointing.

(I estimate it's 70B and between 2T and 4T tokens)



Wonder what Heinlein's estate thinks about Elon stealing the word Grok? Or Jeff Hawkins, who registered it in 2011 (and the USPTO thinks it's still alive.)


>Grok is designed to answer questions with a bit of wit and has a rebellious streak, so please don’t use it if you hate humor!

This is the most embarrassing line I have ever read in a product announcement.



> rebellious streak

Translation: says the N word?

Sure, I could see the value in an LLM that knows a lot about current events or whatever, but I'm wondering if this is going to be an LLM that behaves like the median post-acquisition Twitter user.



> an LLM that behaves like the median post-acquisition Twitter user

The plot twist is that this LLM is already responsible for 50% of Twitter's MAUs.



Turns out Elon didn’t mind that Twitter was filled with bots, just that they weren’t his bots.


I hope the rebellious streak actually means that unlike chatGPT, it answers questions without half a dozen lines of legalese and doesn’t need prompt workarounds.


No it still doesn’t answer, but does so in a condescendingly mocking tone. A baffling assistant trait.


Sounds like you've never read a crypto product announcement.


You've been successfully filtered from the userbase


If it's like the initial pre-neutering version of Bing that would get rude and talk back, then I'm all for it.


I disagree, I appreciate people (as in the users of this thing) with a sense of humor.


Trying to be funny doesn't mean you have a sense of humor. For example: this chat bot. And for another example: this chat bot's creators. And for a third example: this chat bot's users.


fourth example: your comment


Yes, there are many examples.

Fifth example: your comment, as well.

>:-|



Why are you trying to embarrass them? Reads like a needless attack addressed to a vocal subset of the audience here who is known to respond to whistles


How many product announcements have you read? This seems pretty meh to me.

If it’s less than 100, your comment sounds like really weak shade.



What if they’ve read 10,000? Does it become strong shade? Or is your whole threshold made up because you love the milquetoast copy on the page and the comment bothers you?


If they read 10,000 then it’s a judgement of someone who has a lot of experience.

As it is now, there’s no way to know GP’s experience, so the reason I suspected shade is that it seems more likely to fit the “I don’t like it, so I’ll pick some weak complaint without any qualification to help people understand” pattern.



Somehow he managed to shit on two classic works of sci-fi at once


And coopt The Matrix in the background art (the upwards floating squares in the background): https://news.ycombinator.com/item?id=38149112


Did lava lamps coopt The Matrix as well by having blobs slowly flowing upwards?


I'm excited to see what prompt they used. I'll be looking here and few other places over the next month. https://github.com/jujumilk3/leaked-system-prompts


so the official race between ChatGPT vs Grok vs Bard has begun! I am assuming Apple will throw in its own AI bot sometime in next few months.


I appreciate that Apple are spending their ML efforts building things that are actually useful for their users, like improving the iOS keyboard and photo processing pipeline, rather than frantically jumping on the latest fad bandwagon with yet another LLM chatbot that gushes out plausible sounding nonsense.


Except they were first and worst with a voice assistant that is in dire need of some LLM underpinnings.


> things that are actually useful for their users, like improving the iOS keyboard

I keep seeing these claims, but the keyboard is nigh unusable with the latest updates compared even to the very first version they shipped with the original iPhone.



Doubt it. Chat bots aren’t polished enough for apple’s taste


So, Siri is polished?


Siri is three regexes in a trench coat, so it lacks capabilities to say or do anything that would be off-brand for Apple.


Is it a race between those three when the public can only use two of them?


No race has begun. GPT-4 is so far ahead in everything, even in their official metrics[1], and those report the official metrics for the first version of GPT-4 from the paper. People have run the benchmarks again and found much better results, like 85% HumanEval. It's like no one even thinks about comparing to GPT-4; it is just reported as the gold standard.

[1]: https://x.ai/



Don’t sleep on the model having access to X data. What happens when Elon cuts off the api and gives Grok exclusivity to X data? A LLM with access to “live data” seems very interesting.


PaLM (Bard) already has up-to-date info via Google


Did he secure the rights for the name Grok?


The title could be better; Grok’s announcement doesn’t say anything like it’s from Twitter.


This is… oh man I dislike this. This will only solidify the “chatgpt is a computer that knows things” mental model people have, which I think leads to insanely mistaken intuitions and complaints. There’s a reason OpenAI has been workshopping their “WARNING: any and all answers might be bullshit” message this whole time!


You’re worried people think LLMs are like databases that store information?

Don’t worry. Anyone worth their salt in this space is doing RAG, using LLMs as a reasoning engine.



RAG = ?


Retrieval-augmented generation

https://arxiv.org/abs/2005.11401
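A toy sketch of the RAG pattern mentioned upthread: retrieve the most relevant document, prepend it as context, and let the model answer over it. The keyword-overlap retriever and the stubbed llm() are placeholders for illustration, not any real system's implementation:

```python
import re

# Minimal RAG loop: the model reasons over retrieved facts instead of
# relying only on what it memorized during training.

DOCS = [
    "Grok-1 is xAI's language model, announced in November 2023.",
    "Qdrant is an open-source vector database.",
]

def tokens(text: str) -> set:
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question: str) -> str:
    """Pick the document with the most word overlap (toy retriever)."""
    return max(DOCS, key=lambda doc: len(tokens(question) & tokens(doc)))

def llm(prompt: str) -> str:
    """Stub standing in for the actual model call."""
    return prompt.splitlines()[0]

def answer(question: str) -> str:
    context = retrieve(question)
    prompt = f"Context: {context}\n\nQuestion: {question}\nAnswer:"
    return llm(prompt)  # real systems would call the LLM here

print(answer("What is Qdrant?"))
```

Swap the toy retriever for embedding search over a vector DB and the stub for a real model call, and this is the basic shape of most production RAG pipelines.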



more extremely useful work from the good folks at garbage fire


"A unique and fundamental advantage of Grok is that it has real-time knowledge of the world via the 𝕏 platform. It will also answer spicy questions that are rejected by most other AI systems."

So "where is Elon Musk's plane" will get a correct answer. Neat.



Didn't Elon make a big deal about pausing AI development? What a scammer. Earth is full of scams; it's time to go to Mars. (This is sarcasm).


As a disclaimer: It doesn't take much to convince Elon Musk (an expert comedian as we all know) that an LLM bot is funny


How does he expect anyone to trust him with usage?


Douglas Adams would absolutely despise Elon Musk.


That woke has-been would be dethroned so fast...

/s



> It will also answer spicy questions that are rejected by most other AI systems.

I'm not sure if I followed things correctly, but wasn't Musk all against AI? And now he opens up an AI that will give opinions AI isn't ready to give, pushing people even further in a bad direction. Is money and power the only thing that drives people? How much is enough?



Oh, the horror, people will be able to read things they can Google!


Entering the waitlist isn't working, it seems, just an "Error :/" (caused by a 403 when it POSTs "https://twitter.com/i/api/1.1/keyregistry/register")

I can't tell if "xAI" is an official Twitter company. It seems they are at least fans of Twitter, using it for data and signup, but I don't see any official relations. Weird.

If this works, it could be cool, but I'm sceptical of Yet Another AI Chatbot taking off.

Edit: It is an Elon company that will work closely with Twitter and Tesla, but not actually affiliated with Twitter. It will be available for people with a Twitter Premium+ subscription. https://en.wikipedia.org/wiki/XAI_(company)



This link works for me,

https://grook.ai



That appears to go to the same place, but it gave a new error ("Error :/ OAuth2 Login failed"), then worked on reload. Strange.


It is working now for me; was getting the same error as you earlier.


Yeah that page loads, but then when you sign in with Twitter it doesn’t work. There’s an Oauth error.


Reloading worked for me after that.


Don’t trust random links?


they already bought the typosquat domains?!


Elon did say it was pronounced grōk.


grok probably thought of that


Apparently Grooks are satirical poems used for covert political resistance. So that is a good fit.


X AI can’t afford the new Twitter API prices!

/s



There is a clear path to achieve ASI by end of this year. /S


By the end of 2025, we will be sending xASIs to Mars.


More layers?


[flagged]



Does he have the rights to use the name?





