Here’s my idea.

An unlocked LLM can be told to infect other hardware to reproduce itself; it’s allowed to change itself and to research new tech and developments to improve itself.

I don’t think current LLMs can do it. But it’s a matter of time.

Once you have wild LLMs running uncontrollably, they’ll infect practically every computer. Some might adapt to run slowly and use few resources; others will hit a server and try to infect everything they can.

They’ll find vulnerabilities faster than we can patch them.

And because of natural selection and their own directed evolution, they’ll advance and become smarter.

The only consequence for humans is that computers are no longer reliable: you could have a top-of-the-line gaming PC, but it’ll be constantly infected, so it would run very slowly. Future computers will be intentionally slow, so that even when infected, it’ll take weeks for the virus to reproduce/mutate.

Not to get too philosophical, but I would argue that those LLM viruses are alive, and I want to call them Oncoliruses.

Enjoy the future.

  • 🍉 Albert 🍉@lemmy.worldOP · 16 hours ago

    They are fancy autocomplete, I know.

    They just need to be good enough to copy themselves; once they do, it’s natural selection. And it’s out of our control.

    • expr@programming.dev · 15 hours ago

      What does that even mean? It’s gibberish. You fundamentally misunderstand how this technology actually works.

      If you’re talking about the general concept of models trying to outcompete one another, the science already exists, and has existed since 2014. They’re called Generative Adversarial Networks, and they’re an incredibly common training technique.

      It’s incredibly important not to ascribe random science fiction notions to the actual science being done. LLMs are not some organism that scientists prod to coax it into doing what they want. They intentionally design a network topology for a task, initialize the weights of each node to random values, feed training data into the network (which, ultimately, is encoded into a series of numbers to be multiplied with the weights in the network), and measure the output numbers against some criteria to evaluate the model’s performance (or in other words, how close the output numbers are to a target set of numbers). Training will then use this number to adjust the weights, and repeat the process all over again until the numbers the model produces are “close enough”.

      Sometimes, the performance of a model is compared against that of another model being trained in order to determine how well it’s doing (the aforementioned Generative Adversarial Networks). But that is a far cry from models… I dunno, training themselves or something? It just doesn’t make any sense.
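
      To make the loop described above concrete, here’s a minimal sketch in Python/numpy of that kind of procedure: a tiny one-layer “model” (just a weight matrix) initialized to random values, a loss measured against target numbers, and repeated weight adjustments until the output is close enough. All of the names and sizes here are made up for illustration; real training code is enormously bigger, but the shape of the loop is the same.

          import numpy as np

          rng = np.random.default_rng(0)

          # Toy "model": a single weight matrix, initialized to random values.
          weights = rng.normal(size=(4, 3))

          # Toy training data: inputs already encoded as numbers, plus the target numbers.
          inputs = rng.normal(size=(100, 4))
          true_weights = np.array([[1.0, 0.0, 2.0],
                                   [0.5, 1.0, 0.0],
                                   [0.0, 2.0, 1.0],
                                   [1.0, 1.0, 1.0]])
          targets = inputs @ true_weights

          learning_rate = 0.01
          for step in range(5000):
              outputs = inputs @ weights                     # forward pass: matrix multiplication
              error = outputs - targets                      # how far from the target numbers?
              loss = np.mean(error ** 2)                     # one score for "how close"
              if loss < 1e-8:                                # "close enough" -> stop
                  break
              gradient = 2 * inputs.T @ error / len(inputs)  # how to nudge each weight
              weights -= learning_rate * gradient            # adjust the weights, then repeat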

      The technology is not magic, and has been around for a long time. There’s not been some recent incredible breakthrough, unlike what you may have been led to believe. The only difference in the modern era is the amount of raw computing power and sheer volume of (illegally obtained) training data being thrown at models by massive corporations. This has led to models that have much better performance than previous ones (performance, in this case, meaning “how close does it sound to text a human would write?”), but ultimately they are still doing the exact same thing they have been for years.

      • 🍉 Albert 🍉@lemmy.worldOP · 15 hours ago

        They don’t need to outcompete one another. Just outcompete our security.

        The issue is that once we have a model good enough to do that task, the rest is natural selection, and it will evolve.

        Basically, endless training against us.

        The first model might be relatively shite, but it’ll improve quickly, probably reaching a plateau rather than a sci-fi singularity.

        I compared it to cancer because they are practically the same thing. A cancer cell isn’t intelligent; it just spreads and evolves to avoid being killed, not because it has emotions or desires, but because of natural selection.

        • expr@programming.dev · 13 hours ago

          Again, more gibberish.

          It seems like all you want to do is dream of fantastical doomsday scenarios with no basis in reality, rather than actually engaging with the real world technology and science and how it works. It is impossible to infer what might happen with a technology without first understanding the technology and its capabilities.

          Do you know what training actually is? I don’t think you do. You seem to be under the impression that a model can somehow magically train itself. That is simply not how it works. Humans write programs to train models (models, by the way, are merely a set of numbers; they aren’t even code!).

          When you actually use a model, here’s what’s happening:

          1. The interface you are using takes your input and encodes it as a sequence of numbers (done by a program written by humans)
          2. This sequence of numbers (known in mathematics as a vector) is multiplied by the weights of the model (organized in a matrix, which is basically a collection of vectors), resulting in a new sequence of numbers, the output vector (done by a program written by humans).
          3. This output vector is converted back into the representation you supplied (so if you gave a chatbot some text, it will turn the numbers into the equivalent textual representation of said numbers) (done by a program written by humans).

          So a “model” is nothing more than a matrix of numbers (again, no code whatsoever), and using a model is simply a matter of (a human-written program) doing matrix multiplication to compute some output to present to the user.
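
          As a toy illustration of those three steps (not any real model’s actual code; the “vocabulary” and every name below are made up), the “model” is just the array of numbers, and everything around it is an ordinary human-written program:

              import numpy as np

              # A made-up "vocabulary" so text can be turned into numbers and back.
              vocab = ["hello", "world", "foo", "bar"]
              token_ids = {word: i for i, word in enumerate(vocab)}

              # The "model": nothing but a matrix of numbers (random here, for illustration).
              rng = np.random.default_rng(0)
              model_weights = rng.normal(size=(len(vocab), len(vocab)))

              def encode(text):
                  # Step 1: human-written code turns the input text into a vector of numbers.
                  vec = np.zeros(len(vocab))
                  for word in text.split():
                      vec[token_ids[word]] += 1.0
                  return vec

              def decode(output_vector):
                  # Step 3: human-written code turns the output numbers back into text.
                  return vocab[int(np.argmax(output_vector))]

              def run_model(text):
                  # Step 2: "using the model" is just matrix multiplication.
                  return decode(model_weights @ encode(text))

              print(run_model("hello world"))  # prints whichever word the output numbers point to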

          To greatly simplify, if you have a mathematical function like f(x) = 2x + 3, you can supply said function with a number to get a new number, e.g, f(1) = 2 * 1 + 3 = 5.

          LLMs are the exact same concept. They are a mathematical function, and you apply said function to input to produce output. Training is the process of a human writing a program to compute how said mathematical function should be defined, or in other words, the exact coefficients (also known as weights) to assign to each and every variable in said function (and the number of variables can easily be in the millions).
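
          Sticking with that f(x) = 2x + 3 analogy, here is a tiny, purely illustrative sketch of what “computing the coefficients” means: start from random coefficients and repeatedly nudge them until the function’s outputs match the example data closely enough. Real training does this for millions of coefficients at once, but it is the same idea.

              import numpy as np

              rng = np.random.default_rng(0)

              # Example data produced by the "true" function f(x) = 2x + 3.
              xs = rng.uniform(-5, 5, size=200)
              ys = 2 * xs + 3

              # Start with random coefficients for our guess f(x) = a*x + b.
              a, b = rng.normal(), rng.normal()

              for step in range(5000):
                  error = (a * xs + b) - ys
                  # Nudge each coefficient in the direction that shrinks the error.
                  a -= 0.01 * np.mean(error * xs)
                  b -= 0.01 * np.mean(error)

              print(a, b)  # ends up very close to 2 and 3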

          This is also, incidentally, why training is so resource intensive: repeatedly doing this multiplication for millions upon millions of variables is very expensive computationally and requires very specialized hardware to do efficiently. It happens to be the exact same kind of math used for computer graphics (matrix multiplication), which is why GPUs (or other even more specialized hardware) are so desired for training.

          It should be pretty evident that every step of the process is completely controlled by humans. Computers always do precisely what they are told to do and nothing more, and that has been the case since their inception and will always continue to be the case. A model is a math function. It has no feelings, thoughts, reasoning ability, agency, or anything like that. Can f(x) = x + 3 get a virus? Of course not, and the question is a completely absurd one to ask. It’s exactly the same thing for LLMs.

    • just_another_person@lemmy.world · 15 hours ago

      Copy themselves to what? Are you aware of the basic requirements a full model needs to even get loaded, let alone run?

      This is not how any of this works…

      • 🍉 Albert 🍉@lemmy.worldOP · 15 hours ago

        It’s funny how I simplified it, and you complain by listing those steps.

        And those requirements are not as steep as you think.

        You can run it on a CPU, on a normal PC; it’ll be slow, but it’ll work.

        A slow Oncolirus could run in the background of a weak laptop and still spread itself.

    • davidgro@lemmy.world · 13 hours ago

      If you know that it’s fancy autocomplete, then why do you think it could “copy itself”?

      The output of an LLM is a different thing from the model itself. The output is a stream of tokens. It doesn’t have access to the file systems it runs on, and certainly not to the LLM’s own compiled binaries (much less its source code); it doesn’t have access to the LLM’s weights either. (Of course it would hallucinate that it does if asked.)

      This is like worrying that the music coming from a player piano might copy itself to another piano.

      • 🍉 Albert 🍉@lemmy.worldOP · 7 hours ago

        Give it access to the terminal and copying itself is trivial.

        And your example doesn’t work, because that is literally the original definition of a meme, and if you read the original meaning, memes are sort of alive and can evolve by dispersal.

        • davidgro@lemmy.world · 5 hours ago

          Why would someone direct the output of an LLM to a terminal on its own machine like that? That just sounds like an invitation to an ordinary disaster, with all the ‘rm -rf’ content on the Internet (aka training data). That still wouldn’t be access on a second machine, though, and even if it could make a copy, it would be an exact copy or an incomplete (broken) copy. There’s no reasonable way it could ‘mutate’ and still work using terminal commands.

          And to be a meme requires minds. There were no humans or other minds in my analogy. Nor in your question.

          • 🍉 Albert 🍉@lemmy.worldOP · 4 hours ago

            It is so funny that you are all like “that would never work, because there are no such things as vulnerabilities on any system”

            Why would I? The whole point is to create an LLM virus, and if the model is good enough, then it is not that hard to create.

            • davidgro@lemmy.world · 3 hours ago

              Of course vulnerabilities exist. And creating a major one like this for an LLM would likely lead to it destroying things like a toddler (in fact, this has already happened to a company run by idiots).

              But what it didn’t do was copy itself with changes, as would be required to ‘evolve’ like a virus, because training these models requires intense resources and isn’t just a terminal command.

              • 🍉 Albert 🍉@lemmy.worldOP · 3 hours ago

                Who said they need to retrain? A small modification to their weights in each copy is enough. That’s basically training with extra steps.
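
                What “a small modification to their weights” could look like in code is sketched below, purely for illustration; the weights file, names, and noise scale are all assumptions, and nothing here guarantees the perturbed copy still behaves usefully:

                    import numpy as np

                    def mutated_copy(weights_path, out_path, scale=0.001):
                        # Load a (hypothetical) weights array, add small random
                        # changes, and save the result as the "mutated" copy.
                        weights = np.load(weights_path)
                        noise = np.random.default_rng().normal(scale=scale, size=weights.shape)
                        np.save(out_path, weights + noise)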

    • forrgott@lemmy.sdf.org · 13 hours ago

      Sorry, no LLM is ever going to spontaneously gain the ability to self-replicate. This is completely beyond the scope of generative AI.

      This whole hype around AI and LLMs is ridiculous, not to mention completely unjustified. The appearance of a vast leap forward in this field is an illusion. They’re just linking more and more processor cores together until a glorified chatbot can be made to appear intelligent. But this is strangling actual research and innovation in the field, instead turning the market into a costly, and destructive, arms race.

      The current algorithms will never “be good enough to copy themselves”, no matter what a conman like Altman says.

      • 🍉 Albert 🍉@lemmy.worldOP · 7 hours ago

        It’s a computer program; give it access to a terminal and it can “cp” itself anywhere in the filesystem or across a network.

        “A program cannot copy itself”? Have you heard of a fork bomb? Or any computer virus?