Wikipedia's founder said he used ChatGPT in the review process for an article and thought it could be helpful. Editors replied to point out it was full of mistakes.
This is another reason why I hate bubbles. There is something potentially useful in here. It needs to be considered very carefully. However, it gets to a point where everyone’s kneejerk reaction is that it’s bad.
I can’t even say that people are wrong for feeling that way. The AI bubble has affected our economy and lives in a multitude of ways that go far beyond any reasonable use. I don’t blame anyone for saying “everything under this is bad, period”. The reasonable uses of it are so buried in shit that I don’t expect people to even bother trying to reach into that muck to clean it off.
This bubble’s hate is pretty front-loaded though.
Dotcom was, well, a useful thing. I guess valuations were nuts, but it looks like the hate was mostly in the enshittified aftermath that would come.
Crypto is a series of bubbles trying to prop up flavored pyramid schemes for a neat niche concept, but people largely figured that out after they popped. And it’s not as attention grabbing as AI.
Machine Learning is a long-running, useful field, but ever since ChatGPT caught investors' eyes, the cart has felt so far ahead of the horse. The hate started, and got polarized, waaay before the bubble popped.
…In other words, AI hate almost feels more political than bubble fueled. If that makes any sense. It is a bubble, but the extreme hate would still be there even if it wasn’t.
Crypto was an annoying bubble. If you were in the tech industry, you had a couple of years where people asked you if you could add blockchain to whatever your project was, and then a few more years of hearing about NFTs. And GPUs shot up in price. Crypto people promised to revolutionize banking, and then it devolved into get-rich-quick schemes. It took time for the hype to die down, and for people to realize that the tech wasn't useful and that the costs of running it weren't worth it.
The AI bubble is different. The proponents are gleeful while they explain how AI will let you fire all your copywriters, your graphic designers, your programmers, your customer support, etc. Every company is trying to figure out how to shoehorn AI into their products. While AI is a useful tool, the bubble around it has hurt a lot of people.
That’s the bubble side. It also gets a lot of baggage because of the slop generated by it, the way it’s trained, the power usage, the way people just turn off their brains and regurgitate whatever it says, etc. It’s harder to avoid than crypto.
God I had coworkers that had never used a vr headset claiming the metaverse was going to be the next big thing. I wish common sense was common.
“The metaverse” changed its definition depending on who you talked to. Some definitions didn’t even include VR.
“AI” also changes its definition depending on who you talk to.
Vague definitions = hype
Yeah, you’re right. My thoughts were kinda uncollected.
Though I will argue some of the negatives (like inference power usage) are massively overstated, and even if they aren’t, are just the result of corporate enshittification more than the AI bubble itself.
Even the large scale training is apparently largely useless: https://old.reddit.com/r/LocalLLaMA/comments/1mw2lme/frontier_ai_labs_publicized_100kh100_training/
I believe that the bad behavior of corporate interests is often one of the key contributors to these financial bubbles in every sector where they appear.
To say that some of the bad things about this particular financial bubble are because of a bunch of companies being irresponsible and/or unethical seems not to acknowledge that one is primarily caused by the other.
So… I actually proposed a use case for NLP and LLMs in 2017. I don’t actually know if it was used.
But the use case was generating large sets of fake data that looked real enough for performance-testing enterprise-sized data transformations. That way we could skip a large portion of the risk associated with using actual customer data. We wouldn’t have to generate the data beforehand, we could validate logic with it, and we could just plop it in the replica non-production environment.
At the time we didn’t have any LLMs. So it didn’t go anywhere. But it’s always funny when I see all this “LLMs can do x” because I always think about how my proposal was to use it… For fake data.
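A minimal sketch of that fake-data idea. Everything here (the schema, field names, and value pools) is hypothetical, and it uses stdlib `random` to stand in for the NLP/LLM generator the proposal imagined, since the point is just producing schema-shaped records you can load into a replica environment without touching real customer data:

```python
import csv
import io
import random
import string

# Hypothetical customer schema for illustration only.
FIRST_NAMES = ["Ava", "Liam", "Noah", "Mia", "Zoe", "Eli"]
LAST_NAMES = ["Smith", "Nguyen", "Garcia", "Patel", "Kim", "Okafor"]
FIELDS = ["customer_id", "name", "email", "balance_cents"]

def fake_customer(rng: random.Random) -> dict:
    """Return one synthetic customer row matching the toy schema."""
    first = rng.choice(FIRST_NAMES)
    last = rng.choice(LAST_NAMES)
    return {
        "customer_id": "".join(rng.choices(string.digits, k=10)),
        "name": f"{first} {last}",
        "email": f"{first.lower()}.{last.lower()}@example.com",
        "balance_cents": rng.randint(0, 5_000_000),
    }

def fake_dataset(n: int, seed: int = 0) -> str:
    """Emit n synthetic rows as CSV, ready to load into a test env."""
    rng = random.Random(seed)  # seeded so runs are reproducible
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    for _ in range(n):
        writer.writerow(fake_customer(rng))
    return buf.getvalue()
```

An LLM-backed version would swap `fake_customer` for a prompted generation step producing more realistic free-text fields, which is exactly where plain `random` falls short.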