(comments)

Original link: https://news.ycombinator.com/item?id=43840312

A Hacker News discussion prompted by someone replacing Google with ChatGPT for looking things up, which drew mixed reactions. Some find that AI tools like Bing CoPilot and Google Gemini answer questions directly and more efficiently, avoiding ad-laden search results. Perplexity is praised for linking its sources, which adds credibility. Others worry about AI reliability, its potential to spread misinformation, and the possibility that it will eventually be gamed the way SEO gamed search. Many concede that AI is useful for explanations and tutorials but stress the importance of verifying its output. Still others worry about the commercialization of personalized "truth" and its effects on society. The discussion highlights a shift in how people interact with the internet, with AI emerging as a potential new interface, while underscoring the need for critical evaluation and source-checking in an AI-driven information age.

Related Articles
  • (comments) 2025-04-08
  • (comments) 2025-03-26
  • (comments) 2025-03-29
  • (comments) 2025-04-09
  • (comments) 2025-04-28

  • Original
    I've largely replaced Google with ChatGPT for looking things up (twitter.com/paulg)
    34 points by nomilk 42 minutes ago | 32 comments

    I could equally say I've largely replaced Google search with Google Gemini.

    The Gemini product seems to be evolving better and faster than ChatGPT. Probably doing so cheaper, too, given they have their own hardware.

    I am pleasantly surprised how Gemini went from bad to quite good in less than a year.



    I hate people's unrealistic expectations of AI but also find Bing CoPilot to be really useful.

    Instead of structuring a Google query in an attempt to find a relevant page filled with ads, I just ask Copilot and it gives a fully digested answer that satisfies my question.

    What surprises me is that it needs very little context.

    If I ask ‘ Linux "sort" command line for sorting the third column containing integers’, it replies with “ sort -k3,3n filename” along with explanations and extensions for tab separated columns.
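
    For reference, a minimal sketch of that exchange (the tab-separated variant and the file names are illustrative assumptions, not Copilot's exact output):

        # numeric sort on the 3rd whitespace-separated column
        sort -k3,3n filename

        # assumed extension: the same sort when columns are tab-separated
        sort -t$'\t' -k3,3n data.tsv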



    > Bing CoPilot

    Tangent: it annoys me so much that there's a persistent, useless tiny horizontal scroll on the Bing page! When scrolling down, it rattles the page left and right.



    > If I ask ‘ Linux "sort" command line for sorting the third column containing integers’,

    Wow, that's actually quite a lot. You can also just say "sort 3rd col nix."



    I understand that they are products from different generations, but there's also an incumbent/contender effect. Google's main goal isn't to grow or provide the best quality, but to monetize, while ChatGPT is early in its lifespan and focuses on losing money to gain users, and doesn't monetize a lot yet (no ads, at-cost or at-loss subscriptions).

    Another effect is that Google has already been targeted for a lot of abuse, with SEO, backlinks, etc. ChatGPT has not yet seen many successful attempts by third parties at manipulating the essence of the mechanism in order to advertise.

    Finally, ChatGPT is designed to parasite off of the Web/Google: it shows the important information without the organic ads. If a law firm puts out information on the law and real estate regulations, Google shows the info, but also the whole website for the law firm and the logo and motto and navbar with other info and calls to action. ChatGPT cuts all that out.

    So while I don't deny that there is a technological advancement, there is a major component to quality here that is just based on n+1 white-label parasitism.



    Is it possible that the nature of deep learning completely negates SEO? I think SEO will be reinfected by OpenAI intentionally rather than it being a cat and mouse game.


    Remember those GAN demos where a single carefully chosen pixel change turns a cat classification into a dog? It would be really surprising if people aren't working out how to do similar things that turn your random geography question into hotel adverts as we speak.

    At least it seems likely to be more expensive for attackers than the last iteration of the spam arms race. Whether or to what extent search quality is actually undermined by spammers vs Google themselves is a matter for some debate anyway.



    I don’t look forward to the opposite - SEO infecting AI so that its output starts containing product placement!


    I don't know enough about how ChatGPT et al determine what is or isn't credible. SEO was a bit of a hack, but Google has done an OK job of keeping one step ahead of the spammers. It's only a matter of time before nefarious parties learn how to get ChatGPT to trust their lies, if they haven't already.


    Google is turning into Alta Vista right before our eyes!

    I have replaced Google with Perplexity. It backs up every answer with links, so I find it to be more trustable than ChatGPT.

    Perplexity also keeps their index current, so you're not getting answers from the world as it existed 10 months ago. (ChatGPT says its knowledge cutoff is June 2024; Perplexity says its search index includes results up to April 29, 2025, and to prove it, it gives me the latest news.)



    As a search vehicle, absolutely. But they aren't sitting idly by - as much as I personally hate it, a lot of people like Gemini. Either way, it does spell a big danger to their primary source of revenue.

    What's interesting is the monetization aspect. Right now none of them act as ad vehicles. Who will be the first to fall?



    Ads are not the only challenge. Cost-per-query is also a challenge, as an LLM query is 10x more expensive than a search-index query.


    I'm pretty sure Perplexity attempted to place ads within their responses but had a large backlash.


    Why not use Gemini which is as good or better and has a more recent knowledge cutoff?


    People are just comfy with what they started with.


    I did this today and Grok led me to make an embarrassingly wrong comment (Grok stated "Rust and Java, like C/C++ mentioned by Sebastian Aaltonen, leave argument evaluation order unspecified" - I now know this is wrong and both are strict left-to-right). ChatGPT gets it correct. But I think we're still in the "check the references" stage.


    Asking AI is like phoning a friend and catching them out at a bar. Maybe they’re sober enough to give good info, but don’t take it as gospel.


    True. I use Kagi, which supports its facts with citations. More than once I have read the cited material and found no trace of the so-called facts.


    If the only thing it did was give me references, that's already a leg up.


    I did this as well for a while, but have actually been impressed with how helpful the Google AI summaries have become. Now I'm back to a hybrid 50% pure LLM, 50% Google approach.

    (and I use Google's Gemini for 50% of my pure LLM requests)



    I use the various chatbots when I want a tutorial (e.g. math or some programming library). I can (in fact, must) verify these myself. This is strictly better than doing a bunch of queries and reading a bunch of blogs. I also use them for the class of queries where I don't even know how to begin: ("there is some expression in southern-US english involving animals, about being dumb, that sounds like '$blah', but isn't that. what might it be?") Chatbots are great for that stuff.

    Chatbots are absolute trash when it comes to needing factual information that cannot be trivially verified. I include the various "deep research" tools -- they are useless, except maybe as a starting point. For every problem I've given them, they've just been wrong. Not even close. The people who rely on these tools, it seems to me, are the same sort of folks who read newspaper headlines and Gartner 'research reports' and accept the conclusions at face value.

    For anything else, it's just easier to write a search query. The internet is wrong too, but it's easier for me to cross-validate answers across 10 blue links than to figure it out via socratic method with a robot.



    I find AI to be mainly helpful at explaining new topics (precision not essential), but I don't trust exact facts and figures given by AI because of hallucination issues.

    Maybe I just don't have the right ChatGPT++ subscription.



    If Google can somehow undo literally everything about SEO, then it can become useful again


    > If Google can somehow undo literally everything about SEO, then it can become useful again

    So, Google should De-Google itself?



    Yep, same. AI is the next form of interaction with the Internet, with websites serving the role that books did previously.


    Even though I understand LLMs' penchant for hallucination, I tend to trust them more than I should when I am busy and/or dealing with "non-critical" information.

    I'm wondering if ChatGPT (and similar products) will mimic social media as a vector of misinformation and confirmation bias.

    LLMs are very clearly not that today, but social media didn't start out anything like the cesspool it has become.

    There are a great many ways that being the trusted source of customized, personalized truth can be monetized, but I think very few would be good for society.



    I’m curious what he’s looking up and whether he double-checks his sources. As we gradually move more and more into AI, I think there are going to be some weird impacts of information being more strictly curated, and I wonder how “AI-think” will start to impact the public square.


    I don't doubt that ChatGPT is better than Google for looking things up.

    I also don't think ChatGPT is very reliable for looking things up. I think Google has just been degraded so far as a product that it is near worthless for anything more than the bottom 40% of scenarios.



    Indeed, also no reason to go back to Google.

    I wish there were a free Gmail alternative (if there is, lmk!).



    Outlook

    (also, there is no such thing as a free lunch https://en.wikipedia.org/wiki/No_such_thing_as_a_free_lunch)



    > but it hasn't changed anything about what I write.

    I think most authors would argue the same thing, but it's really up to the readers to decide isn't it?



    Curious thought, thanks for sharing.










