Let’s take an example.

We know that searching on Google has gotten worse, but imagine if AI replaced it completely. Searching the web would become something like prompting a chatbot, a complete black box of information. AI could make sure you never see conflicting views on state policies or get access to copyrighted materials…

  • pocker_machine@lemmy.world

    Unpopular opinion - yes but not exactly

    Searching the web is/was always like making a prompt. The difference before the current AI hype was that it was a different kind of algorithm, but still an algorithm tailored to make profit for the company. In other words, the user was never in control of what information they received from the web. That is the nature of the web itself until, to some extent, we hopefully reach a utopian decentralised non-profit web. And hey, we might even get there, because you are reading this on Lemmy.

  • oni ᓚᘏᗢ@lemmy.world

    The worst thing is that AI can easily be programmed to show you whatever opinion they (the people who control the AI) have.

  • paraphrand@lemmy.world

    From what I understand, Elon Musk has literally said he wants to change Grok to do exactly this.

  • FriendOfDeSoto@startrek.website

    All of these things would have been possible to restrict on good old Google searches. And they are enforced to varying degrees around the world, according to differing legal situations. You shouldn’t be able to search for child porn anywhere, swastika merch in Austria, insults of the king in Thailand, etc.

    Search on Google mainly got worse because of Google. They made their results more shit to get you to click on follow-ups, the dreaded page 2 of results for instance, where they could sell more ads.

    I do agree that so-called AI search is more of a black box. Although the Googles and the Bings want you logged in to personalize the results, you can find a way to test their otherwise mostly obscured algorithms in a neutral setting. The models may not allow that, and/or ways of testing their mettle may have yet to be invented. But they will replace search as we knew it.

    The growing faith people have in whatever LLMs spit out (over old-school searches) is very concerning. It’s like LLMs are the new Facebook conspiracies. Schools need to teach media literacy as its own subject. All people under 70 today should have to get a media driver’s license.

    Edit: And I didn’t even mention the “right to be forgotten.” That also exists in the EU.

    • flango@lemmy.eco.brOP

      Yes, you’re right about restricted content on Google and other search companies; but the point I was trying to make is that if we rely on AI as a source of information, it will become more and more difficult to get to the primary source of that information.

      There’s another side to this too: AI can “poison the well”, that is, churn out misinformation 24/7 and spread it across the web so that searching becomes impractical, and then AI can be sold as the answer to that problem.

      I mean, companies are putting a ton of money into this AI hype; it’s almost “too big to fail”. These same companies will begin to destroy our current infrastructure and create problems so that they can sell the solution.

      • FriendOfDeSoto@startrek.website

        I take your point. It’s just that any scenario you’re describing with so-called AI could already have been done by a search engine. The slop of yesteryear was SEO-ranked articles and fake links designed to make the algorithm prioritize your site over others. Well-poisoning is how PR agencies get troublesome celebs out of the headlines again. The list goes on.

        I share your concerns about the black-box nature of so-called AI and, by extension, their search engines. I’m not saying it isn’t a problem; it’s just not a new one. Up until now we have had companies in charge with a vested interest not to bend the flow of information too far from, let’s call it, the median truth. Now companies are letting models make these decisions, and some humans afford these models more credibility than their own common sense, and that is all worrying to say the least. So I’m as worried as you are, it just started earlier for me.

  • wakko@lemmy.world

    There are already AI models trained to distribute intentional misinformation. Grok and DeepSeek are two such examples.

  • Rhynoplaz@lemmy.world

    I’d like to think that a move like that would kill Google (as the search leader), but I bet there are a lot of people who would find it easier to use and never question the results.

  • galoisghost@aussie.zone

    An increasing number of people I know already go straight to ChatGPT to search for things that aren’t direct website links.