College professors are going back to paper exams and handwritten essays to fight students using ChatGPT
The growing number of students using the AI program ChatGPT as a shortcut in their coursework has led some college professors to reconsider their lesson plans for the upcoming fall semester.
If AI were ‘intelligent’, it wouldn’t have written me a set of instructions when I asked it how to inflate a foldable phone. Seriously, check my first post on Lemmy…
https://lemmy.world/post/1963767
An intelligent system would have stopped to say something like “I’m sorry, that doesn’t make any sense, but here are some related topics to help you”
“AI” doesn’t even require that a machine be capable of stringing the complex English language into a series of steps towards something, however pointless and unattainable. That in itself is remarkable, however naive it may be in believing you that a foldable phone can be inflated. You may be confusing AI with AGI, which is when the intelligence and reasoning level is at or slightly above that of humans.
The only real requirement for AI is that a machine take actions in an intelligent manner. Web search engines, dynamic traffic lights, and chess bots all qualify as AI, despite none of them being able to tell you rubbish in proper English.
There’s the rub: defining “intelligent”.
If you’re arguing that traffic lights should be called AI, then you and I might have more in common than we thought. We both believe the same thing: that ChatGPT isn’t any more “intelligent” than a traffic light. But you want to call both of them intelligent, and I want to call neither of them that.
I think you’re conflating “intelligence” with “being smart”.
Intelligence is more about taking in information and being able to make a decision based on that information. So yeah, automatic traffic lights are “intelligent” because they use a sensor to check for the presence of cars and “decide” when to switch the light.
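That kind of sensor-driven “decision” can be sketched in a few lines. This is purely illustrative: the function name, the 10- and 15-second thresholds, and the sensor input are all made up, not taken from any real traffic controller.

```python
def next_light_state(current, car_waiting_on_side_street, seconds_in_state):
    """Decide the main-street light from one sensor reading.

    Hypothetical rules: hold green at least 10 s, then yield to a
    waiting car on the side street; give the side street 15 s of red
    on the main street before switching back.
    """
    if current == "green" and car_waiting_on_side_street and seconds_in_state >= 10:
        return "red"    # a car tripped the sensor; let the side street go
    if current == "red" and seconds_in_state >= 15:
        return "green"  # the side street has had its turn
    return current      # otherwise, no change

print(next_light_state("green", True, 12))  # prints "red"
```

The whole “intelligence” here is one sensor bit plus a timer, which is the point of the comparison.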
Acting like some GPT is on the same level as a traffic light is silly, though. On a base level, yes, it “reads” a text prompt (along with any message history) and decides what to write next. But the decision it’s making is much more complex than “stop or go”.
I don’t know if this is an ADHD thing, but when I’m talking to people, sometimes I finish their sentences in my head as they’re talking. Sometimes I nail it, sometimes I don’t. That’s essentially what ChatGPT is: a sentence finisher that happened to read a huge amount of text on the web, so it has context for a bunch of things. It doesn’t care if it’s right, and it doesn’t look things up before it says something.
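The “sentence finisher” idea can be shown with a toy bigram model: count which word tends to follow which, then continue a prompt by sampling likely next words. This is a deliberately tiny sketch of next-word prediction, not how ChatGPT actually works internally (ChatGPT uses a neural network over far more context than one previous word); the corpus and function names are invented for the example.

```python
import random
from collections import defaultdict

# Tiny training "corpus" -- stand-in for the huge amount of web text
# a real model reads.
corpus = (
    "the phone is on the table . "
    "the cat is on the mat . "
    "the phone is charging ."
).split()

# Count which words follow which: the model's entire "knowledge".
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def finish(prompt, n_words=4, seed=0):
    """Continue `prompt` by repeatedly sampling a likely next word."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(n_words):
        candidates = following.get(words[-1])
        if not candidates:
            break  # word never seen in training; the toy model is stuck
        words.append(rng.choice(candidates))
    return " ".join(words)

print(finish("the cat"))
```

Note what it does not do: it never checks whether its continuation is true, it only picks what statistically tends to come next. That’s the “doesn’t care if it’s right” part, scaled down.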
But to have a computer be able to do that at all? That’s incredible, and it took over 50 years of AI research to hit that point (yes, it’s been a field of university research for a very long time, with people saying for most of that time that it was impossible), and we only hit it because our computers got powerful enough to do it at scale.
deleted by creator
Where does that come from? A better gauge of intelligence is whether someone or something is able to solve a problem they have not encountered before. And arguably, all current models completely suck at that.
I also think the word “AI” is used quite a bit too liberally. It confuses people who have zero knowledge of the topic. And when an actual AI comes along, we will have to make up a new word, because “general artificial intelligence” won’t be distinctive enough for corporations to market their new giant leap in technology…
I would suggest the textbook Artificial Intelligence: A Modern Approach by Russell and Norvig. It’s a good overview of the field and has been in circulation since 1995. https://en.m.wikipedia.org/wiki/Artificial_Intelligence:_A_Modern_Approach
Here’s a photo as an example of how this book approaches the topic: there’s an entire chapter on it, with sections on four different approaches, and essentially even the researchers have been arguing about what intelligence is since the beginning.
But all of this has been under the umbrella of AI. Just because corporations have picked up on the term doesn’t invalidate the decades of work done by scientists in the name of AI.
My favourite way to think of it is this: people have forever argued whether or not animals are intelligent or even conscious. Is a cat intelligent? Mine can manipulate me, even if he can’t do math. Are ants intelligent? They use the same biomechanical constructs as humans, but at a simpler scale. What about bacteria? Are viruses alive?
If we can create an AI that fully simulates a cockroach, down to every firing neuron, does it mean it’s not AI just because it’s not simulating something more complex, like a mouse? Does it need to exceed a human to be considered AI?
Intelligence (in a biological sense) is defined differently from how computer scientists approach describing artificial intelligence. “Making a decision based on information” is not a criterion sufficient to declare something intelligent in the biological sense. But that’s what a lot of people (wrongly) assume when they hear artificial “intelligence”.
To describe an AI as intelligent in the sense used in the natural sciences, you obviously can’t use the criteria applied in computer science. There is broad consensus in biology that animals have intelligence; what’s heavily debated is the scope of that intelligence, or rather which level of intelligence each species reaches.
Viruses are not considered lifeforms, btw. Naturally, there is no 100% certain answer to anything in science. But that shouldn’t be confused with there being no substance to those answers.
I’m with you on this and think the AI label is just stupid and misleading. But times and language change, and you end up being a Don Quixote figure.
Whether it should be called AI or not, I have no idea…
But let’s be clear: some humans are “intelligent” and say crazier things…
Inflating a phone is super easy though!
Overheat the battery. ;) Phone will inflate itself!
If “making sense” was a requirement of intelligence… there would be no modern art museums.
Instructions unclear, inflates phone.
Seriously, if it were actually intelligent, yet also writing out something meant to be considered ‘art’, I’d expect it to also have a disclaimer at the end declaring it satire.
That would require a panel of AIs to discuss whether “/s” or “no /s”…
As it is, it writes anything a person could have written: some of it great, some of it straight off Twitter. We’re supposed to presume at least some level of intelligence in either case.