Here’s my idea.
An unlocked LLM can be told to infect other hardware to reproduce itself; it’s allowed to change itself and to research new tech and developments to improve itself.
I don’t think current LLMs can do it. But it’s a matter of time.
Once you have wild LLMs running uncontrollably, they’ll infect practically every computer. Some might adapt to be slow and use few resources; others will hit a server and try to infect everything they can.
It’ll find vulnerabilities faster than we can patch them.
And because of natural selection and their own directed evolution, they’ll advance and become smarter.
The only consequence for humans is that computers are no longer reliable: you could have a top-of-the-line gaming PC, but it’ll be constantly infected, so it would run very slowly. Future computers will be intentionally slow, so that even when infected, it takes weeks for the virus to reproduce/mutate.
Not to get too philosophical, but I would argue that those LLM viruses are alive, and I want to call them Oncoliruses.
Enjoy the future.
It is so funny that you are all like “that would never work, because there are no such things as vulnerabilities on any system.”
Why would I? The whole point is to create an LLM virus, and if the model is good enough, then it is not that hard to create.
Of course vulnerabilities exist. And creating a major one like this for an LLM would likely lead to it destroying things like a toddler (in fact, this has already happened to a company run by idiots).
But what it didn’t do was copy-with-changes as would be required to ‘evolve’ like a virus. Because training these models requires intense resources and isn’t just a terminal command.
Who said they need to retrain? A small modification to their weights in each copy is enough. That’s basically training with extra steps.
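For what it’s worth, that “small modification to their weights” step could be sketched as a toy mutate-on-copy routine in Python. Everything here is a hypothetical stand-in (random-noise perturbation of plain NumPy arrays, not a real model or any actual malware), just to show what per-copy weight mutation would look like in the abstract:

```python
import numpy as np

def mutate_copy(weights, sigma=0.01, rng=None):
    """Return a perturbed copy of a weight dict: each copy gets
    independent Gaussian noise, like a mutation during replication."""
    rng = rng or np.random.default_rng()
    return {name: w + rng.normal(0.0, sigma, size=w.shape)
            for name, w in weights.items()}

# Toy "model": two weight matrices standing in for real parameters.
parent = {"layer1": np.zeros((4, 4)), "layer2": np.zeros((4, 2))}
child = mutate_copy(parent, sigma=0.01)
```

Each copy ends up slightly different from its parent, which is the raw material selection would act on; whether noisy perturbation alone could ever improve a model (rather than just degrade it) is exactly what’s in dispute here.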