• hoppolito@mander.xyz · 9 hours ago

    One point I would dispute here is determinism. AI models are, by default, deterministic. They are made from deterministic parts, and “any combination of deterministic components will result in a deterministic system”. Randomness has to be externally injected into, e.g., current LLMs to produce ‘non-deterministic’ output.
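
    A quick Python sketch of that separation (the softmax/decode helpers here are hypothetical stand-ins, not any real framework’s API): the forward pass is a pure function, and non-determinism only appears where an external RNG is handed in.

        import math
        import random

        def softmax(logits):
            # Deterministic: the same logits always yield the same distribution.
            m = max(logits)
            exps = [math.exp(x - m) for x in logits]
            total = sum(exps)
            return [e / total for e in exps]

        def greedy_decode(logits):
            # Pure argmax over a pure function: no randomness anywhere.
            probs = softmax(logits)
            return max(range(len(probs)), key=lambda i: probs[i])

        def sampled_decode(logits, rng):
            # Non-determinism enters only via the externally injected RNG.
            probs = softmax(logits)
            return rng.choices(range(len(probs)), weights=probs, k=1)[0]

        logits = [2.0, 1.0, 0.5]
        print(greedy_decode(logits))                    # always 0
        print(sampled_decode(logits, random.Random()))  # varies run to run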

    There is the notable exception of newer models like GPT-4, which seemingly produce non-deterministic outputs (i.e. give them the same prompt and they produce different completions even with the temperature set to 0). But my understanding is that this is due to floating-point inaccuracies which lead to different token selection, and is thus a function of our current processor architectures and not inherent in the model itself.
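
    That mechanism is easy to see in miniature: floating-point addition is not associative, so when the hardware does not fix the order of a reduction (e.g. a GPU summing partial products in whatever order threads finish), the “same” computation can yield slightly different logits from run to run.

        # Floating-point addition is not associative: the same mathematical
        # sum differs depending on the order of operations.
        a, b, c = 0.1, 0.2, 0.3
        print((a + b) + c == a + (b + c))  # False under IEEE 754 doubles
        print((a + b) + c)                 # 0.6000000000000001
        print(a + (b + c))                 # 0.6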

    • nednobbins@lemmy.zip · 2 hours ago

      You’re correct that a collection of deterministic elements will produce a deterministic result.

      LLMs produce a probability distribution over next tokens and then randomly select one of them. That’s where the non-determinism enters the system. Even if you set the temperature to 0, you’re going to get some randomness. The GPU can round two different real numbers to the same floating-point representation, and when that happens, it’s a hardware-level coin toss as to which token gets selected.
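
      A toy Python illustration of that rounding collision (float32 chosen because GPU inference commonly runs at single precision or lower; the two constants are just arbitrary reals that land in the same float32 bucket):

          import struct

          def to_float32(x):
              # Round a Python double to the nearest IEEE 754 single (float32).
              return struct.unpack('f', struct.pack('f', x))[0]

          # Two mathematically distinct logits that collapse to one float32:
          l1 = to_float32(3.14159265)
          l2 = to_float32(3.14159267)
          print(l1 == l2)  # True: argmax now has a genuine tie, and which
                           # index "wins" depends on evaluation order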

      You can test this empirically. Set the temperature to 0 and ask it, “give me a random number”. You’ll rarely get the same number twice in a row, no matter how similar you try to make the starting conditions.
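
      One way to run that experiment, assuming the official openai Python client and an API key in the environment (the model name is illustrative):

          from openai import OpenAI

          client = OpenAI()  # assumes OPENAI_API_KEY is set

          # Same prompt, temperature 0, several times; the answers still vary.
          for _ in range(3):
              resp = client.chat.completions.create(
                  model="gpt-4",  # illustrative; any chat model works
                  messages=[{"role": "user", "content": "give me a random number"}],
                  temperature=0,
              )
              print(resp.choices[0].message.content)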