• nyan@lemmy.cafe · 2 months ago

    The problem isn’t that it didn’t. The problem is that anyone thought that it should have.

  • kat_angstrom@lemmy.world · 2 months ago

    It’s not AI, it’s an LLM. It doesn’t know what misinformation is because it doesn’t know anything.

  • makingStuffForFun@lemmy.ml · 2 months ago

    Most things I ask it give me back a fever dream. You’re overthinking the current state of the tech. Give it another election cycle.

    • JeffKerman1999@sopuli.xyz · 2 months ago

      I just ask it for boilerplate code and it’s ok. I don’t like having to write the same shit a million times.

  • CosmoNova@lemmy.world · 2 months ago

    It’s always refreshing to read reasonable comments to a nonsensical headline, but I do wonder why it even shows up in my feed when it has so many downvotes.

  • rumba@lemmy.zip · 2 months ago

    Lol GPT vs Copilot were in stark contrast…

    I think the journalists should just try to stick to things they understand. They probably ran a single query and it failed, so they kept going in the same conversation.

    Sometimes the difference between a good answer and a bad answer is two or three attempts.

    It’s not like LLMs are particularly good at sussing out lies anyway. It would have to summarize the claims in the article, then do a web search on each one trying to find an answer. That’s a fairly expensive query that they’re honestly going to try to avoid if they can.
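
    The loop described above (extract the claims, then search each one) could be sketched like this. Everything here is a hypothetical stand-in: a real system would call an LLM to extract claims and a search API for evidence, and the per-claim search is exactly where the cost piles up.

    ```python
    # Hedged sketch of the claim-checking loop, not a real fact-checker.
    # extract_claims and search_web are hypothetical stand-ins.

    def extract_claims(article: str) -> list[str]:
        # Stand-in: treat each sentence as one claim.
        # A real system would ask an LLM to pull out checkable claims.
        return [s.strip() for s in article.split(".") if s.strip()]

    def search_web(claim: str) -> list[str]:
        # Stand-in: a real implementation would hit a search API.
        # A tiny fake corpus keeps the loop runnable here.
        corpus = {
            "water is wet": ["source: water is wet"],
            "the sky is green": [],
        }
        return corpus.get(claim.lower(), [])

    def check_article(article: str) -> dict[str, bool]:
        # One search per extracted claim: this is why the approach is
        # expensive -- cost grows linearly with the number of claims.
        return {claim: bool(search_web(claim)) for claim in extract_claims(article)}

    print(check_article("Water is wet. The sky is green."))
    # → {'Water is wet': True, 'The sky is green': False}
    ```

    Even in this toy form you can see the shape: N claims means N searches, which is why providers would rather not run it on every query.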