This is my idea; here’s the thing.
An unlocked LLM can be told to infect other hardware to reproduce itself; it’s allowed to change itself and research new tech and developments to improve itself.
I don’t think current LLMs can do it. But it’s a matter of time.
Once you have wild LLMs running uncontrollably, they’ll infect practically every computer. Some might adapt to run slowly and use few resources; others will hit a server and try to infect everything they can.
They’ll find vulnerabilities faster than we can patch them.
And because of natural selection and their own directed evolution, they’ll advance and become smarter.
The only consequence for humans is that computers will no longer be reliable. You could have a top-of-the-line gaming PC, but it’ll be constantly infected, so it’ll run very slowly. Future computers will be intentionally slow, so that even when infected, it’ll take weeks for the virus to reproduce/mutate.
Not to get too philosophical, but I would argue that those LLM viruses are alive, and I want to call them Oncoliruses.
Enjoy the future.
Claims like this just create more confusion and lead to people saying things like “LLMs aren’t AI.”
LLMs are intelligent - just not in the way people think.
Their intelligence lies in their ability to generate natural-sounding language, and at that they’re extremely good. Expecting them to consistently output factual information isn’t a failure of the LLM - it’s a failure of the user’s expectations. LLMs are so good at generating text, and so often happen to be correct, that people start expecting general intelligence from them. But that’s never what they were designed to do.
Eh, no. The ability to generate text that mimics human writing does not mean they are intelligent. And AI is a misnomer; it has been from the beginning. Now, from a technical perspective, sure, call ’em AI if you want. But using that as an excuse to skip right past the word “artificial” is disingenuous in the extreme.
On the other hand, what people generally mean by “AI” would technically be called GAI, or General Artificial Intelligence, which does not exist (and may or may not ever exist).
Bottom line: a finely tuned statistical engine is not intelligent. And that’s all an LLM, or any other generative “AI,” is at the end of the day. The lack of actual intelligence is evidenced by the rate at which they produce statements that are factually incorrect. So, if you use the most common definition of AI, no, LLMs absolutely are not AI.
I don’t think you even know what you’re talking about.
You can define intelligence however you like, but if you come into a discussion using your own private definitions, all you get is people talking past each other and thinking they’re disagreeing when they’re not. Terms like this have a technical meaning for a reason. Sure, you can simplify things in a one-on-one conversation with someone who doesn’t know the jargon - but dragging those made-up definitions into an online discussion just muddies the water.
The correct term here is “AI,” and it doesn’t somehow skip over the word “artificial.” What exactly do you think AI stands for? The fact that normies don’t understand what AI actually means and assume it implies general intelligence doesn’t suddenly make LLMs “not AI” - it just means normies don’t know what they’re talking about either.
And for the record, the term is Artificial General Intelligence (AGI), not GAI.
So they are not intelligent; they just sound like they’re intelligent… Look, I get it: if we don’t define these words, it’s really hard to communicate.
It’s a system designed to generate natural-sounding language, not to provide factual information. Complaining that it sometimes gets facts wrong is like saying a calculator is “stupid” because it can’t write text. How could it? That was never what it was built for. You’re expecting general intelligence from a narrowly intelligent system. That’s not a failure on the LLM’s part - it’s a failure of your expectations.
I obviously understand that they are AI in the original computer science sense. But that is a very specific definition in a very specific context. “Intelligence,” as it’s used in natural language, requires cognition, which is something that no computer is capable of. It implies an intellect and decision-making ability, none of which computers possess.
We absolutely need to dispel this notion, because it is already doing a great deal of harm all over. This language has absolutely contributed to the scores of people who misuse and misunderstand these systems.
It’s actually the opposite of a very specific definition - it’s an extremely broad one. “AI” is the parent category that contains all the different subcategories, from the chess opponent on an old Atari console all the way up to a hypothetical Artificial Superintelligence, even though those systems couldn’t be more different from one another.