• conciselyverbose@sh.itjust.works · 3 months ago

    Alex demonstrated that ChatGPT was lying intentionally

    No, he most certainly did not. LLMs have no agency. “Intentionally” doing anything isn’t possible.

    • UraniumBlazer@lemm.ee (OP) · 3 months ago

      LLMs have no agency.

      Define “agency”. Why do you have agency but an LLM doesn’t?

      “Intentionally” doing anything isn’t possible.

      I see “intention” as a goal in this context. ChatGPT explained that the goal was to make the conversation appear “natural” (i.e., human-like). That was the intention/goal behind its lying to Alex.

      • Zeoic@lemmy.world · 3 months ago

        That “intention” is not made by ChatGPT, though. Its developers intend for conversations with the LLM to appear natural.

        • UraniumBlazer@lemm.ee (OP) · 3 months ago

          ChatGPT says this itself. But why does an intention have to originate with ChatGPT itself? Our intentions are often trained into us by others; take propaganda as an example: political propaganda, corporate propaganda (advertising), and so on.

            • Zeoic@lemmy.world · 3 months ago

            We have the ability to form our own intentions. The fact that we sometimes follow others’ doesn’t change that.

            Also, if you wrote “I am conscious” on a piece of paper, does that mean the paper is conscious? Does the paper now have the intent to have a natural conversation with you? There is not much difference between that paper and what ChatGPT is doing.

              • UraniumBlazer@lemm.ee (OP) · 3 months ago

              The main problem is the definition of what “us” means here. Our brain is a biological machine guided by the laws of physics. We have input parameters (stimuli) and output parameters (behavior).

              We respond to stimuli. That’s all that we do. So what does “we” even mean? The chemical reactions? The response to stimuli? Even a worm responds to stimuli. So does an amoeba.

              There sure is complexity in how we respond to stimuli.

              The deeper problem here is the absence of an objective definition of consciousness. We simply don’t know how to define consciousness (yet).

              That is largely what leads to questions like the one you just raised.