• logicbomb@lemmy.world
    1 day ago

    The problem with LLMs and other generative AI is that they’re not completely useless. People’s jobs are on the line much of the time, so it would really help if they were completely useless, but they’re not. Generative AI is certainly not as good as its proponents claim, and critically, when it fucks up, it can be extremely hard for a human to tell, which eats away at a lot of its benefit. But it’s not completely useless. For the most basic example, give an LLM a block of text and ask it to improve the grammar or make a point clearer, then compare the AI-generated result with the original and take whatever parts you think the AI improved.

    Everybody knows this, but we all pretend it’s not the case because we’re caring people who don’t want the world drowned in AI hallucinations, who don’t want the world taken over by confidence tricksters faking everything with AI, and who don’t want people to lose their jobs. But sometimes we’re so busy pretending AI is completely useless that we forget it actually isn’t. The reason these tools are so dangerous is precisely that they’re not completely useless.

    • ag10n@lemmy.world
      24 hours ago

      It’s almost as if nuance and context matters.

      How much energy does a human use to write a Wikipedia article? Now also measure the accuracy and completeness of the article.

      Now do the same for AI.

      Objective metrics are what’s missing: much of what we hear is “PhD-level inference,” yet it’s still just a statistical, probabilistic generator.

      https://www.pcmag.com/news/with-gpt-5-openai-promises-access-to-phd-level-ai-expertise

    • snooggums@lemmy.world
      23 hours ago

      It is completely useless as presented by the major players, who keep trying to jam in models that attempt to do everything at the same time, and that is what we always talk about when discussing AI.

      We aren’t talking about focused implementations that are limited to a certain set of data or designed for specific purposes. That is why we don’t need nuance, although the reminder that we aren’t talking about smaller-scale AI used by humans as tools is nice once in a while.