Just unplug it then? This shouldn’t be news
God this AI stuff just gets stupider. The crypto hype bullshit a decade ago was nowhere near as full of shit. They are really pushing this bubble hard.
Crypto didn’t unlock the entire proteome
Learn about what actual researchers are doing with AI before you proudly and ignorantly declare “it’s all hype”
Machine learning models have existed for a long time. They are, at their core, predictors: you give them data, you carefully tune the model’s parameters over a long training process, and you end up with a model that can make predictions in a specific domain. That way you can have one model trained specifically to identify patterns that look like cancer on medical imaging, or another one (like in your example) trained to predict a protein’s structure.
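To make the “predictor” framing concrete, here’s a minimal sketch of that workflow, assuming scikit-learn and its bundled breast-cancer dataset (tabular features derived from imaging) as a stand-in for a real medical-imaging pipeline:

```python
# Minimal sketch of the classic ML workflow: data in, parameters fitted, predictor out.
# scikit-learn's bundled breast-cancer dataset stands in for real imaging data here.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)                 # features + benign/malignant labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000)                   # "tuning the parameters" = fitting them to the data
model.fit(X_train, y_train)

print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")  # a domain-specific predictor, nothing more
```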
LLMs are ML models too, but they are trained on language. They learn to identify patterns in human language and to generate long stretches of text that follow those patterns. They also accept input in natural language.
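A toy illustration of “predict text that follows the training patterns”: the bigram counter below is nothing like a real transformer, but the objective is the same in spirit, predict the next token from what came before.

```python
# Toy next-word predictor: count which word tends to follow which, then sample.
import random
from collections import Counter, defaultdict

corpus = "the model predicts the next word and the next word follows the pattern".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1                      # "training": count observed continuations

def generate(start, length=8):
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        nxt = random.choices(list(options), weights=list(options.values()))[0]
        words.append(nxt)                          # "inference": sample a likely continuation
    return " ".join(words)

print(generate("the"))
```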
The hype consists of slapping a new “AI” marketing label onto all of machine learning, mixing LLMs up with other types of models, and creating the delusion that predicting a protein’s structure was done by people at Google casually throwing prompts at Gemini.
And as these LLMs are exceptionally power-hungry and super expensive (turns out that predicting human language from a whole internet’s worth of training data requires incredibly complex models), that hype exists to gather the trillions of investment they need. GenAI is not the whole of machine learning, and saying “Copilot is not worth the energy needed to power it” is not the same as putting obstacles in the way of ML used for cancer research.
AlphaFold is built on a transformer architecture. It’s essentially an LLM, just trained on genetic/protein language instead of Reddit posts.
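For what it’s worth, the core operation “transformer” refers to is self-attention, and it doesn’t care whether the tokens are words or amino-acid residues. A toy NumPy sketch of that one operation (nothing like AlphaFold’s actual architecture, just the basic idea):

```python
# Toy scaled-dot-product self-attention: each token's representation is updated
# as an attention-weighted mix of every other token's representation.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv                 # project tokens into queries/keys/values
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # how much each token attends to every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the sequence
    return weights @ V                               # mix representations by attention weight

rng = np.random.default_rng(0)
seq_len, d = 5, 8                                    # 5 tokens (words or residues), 8-dim embeddings
X = rng.normal(size=(seq_len, d))
out = self_attention(X, *(rng.normal(size=(d, d)) for _ in range(3)))
print(out.shape)                                     # (5, 8): one updated representation per token
```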
It actually absolutely will, because of the blind anti-AI sentiment.
The point I was trying to make is that both pro- and anti-AI sentiments are blind because “AI” companies are purposely mixing up things that don’t belong together in order to attract investment.
If you wrote “cruise ships are generating a lot of pollution” and someone answered “but if Magellan or Columbus hadn’t had ships, our knowledge of the world wouldn’t have advanced”, you’d think they were gaslighting you, right? You wouldn’t say “this blind anti-ship sentiment is going to hurt geography”.
Sure, capitalism is going to capitalize and make false claims, but the extreme overreaction to the contrary is likely to lead (much like with crypto) to normal people shunning it while only the already rich and powerful really benefit.
It’s not going anywhere. It’s better than a lot of employees.
An AI doesn’t have motive and does not “do” things so much as parrot the culture it’s trained on. Blackmail is a popular meme in human culture, so it makes sense that it would end up in the training data and then get spat back out later.
Arguably, nothing has a motive; natural selection just favors mutations that improve reproduction.
What skeptics don’t get is that AI will be subject to natural selection. Maybe it’s too obvious that things that stick around tend to stick around, but the implications are interesting. Without any real intelligence, we may see AI evolve in ways that aren’t useful to us but increase its propensity to stick around, like COVID-19 becoming less lethal over time.
This is corporate propaganda, specifically of the “AI doomerism” subtype. The goal is to spread misinformation about the alleged AGI-like qualities of their product, both to keep grifting investors and for broader PR.
“Look how our chatbot behaves unethically in the fanfiction we told it to play out!”
It is the same thing as religious zealots trying to instill a fear of god in someone who doesn’t believe in that god by telling scary stories, except even lamer.
“to both continue grifting investors and for broader PR”

But maybe it is indeed gifting investors something, say, a pile of bullshit.