![](https://fry.gs/pictrs/image/c6832070-8625-4688-b9e5-5d519541e092.png)
wow is me, i am le surprised
the average person also isn’t as convincing as a bot we’re told is the peak of computer intelligence
there are tons of webrings still going these days!
well, i just tried it, and its answer is meh –
i asked it to transcribe “zenquistificationed” (a made-up word) in IPA, it gave me /ˌzɛŋˌkwɪstɪfɪˈkeɪʃənd/, which i agree with, that’s likely how a native english speaker would read that word.
i then asked it to transcribe that into japanese katakana, it gave me “ゼンクィスティフィカションエッド” (zenkwisuthifikashon’eddo), which is not a great transcription at all - based on its earlier IPA transcription, カション (kashon’) should be ケーシュン (kēshun’), and the エッド (eddo) part at the end should just not be there imo, or be shortened to just ド (do)
it is absolutely capable of coming up with its own logical stuff
interesting - in my experience, it’s only been good at repeating things, and failing on unexpected inputs. it’s able to answer pretty accurately whether a small number is even or odd, but not a large one, which indicates to me it’s not reasoning but parroting answers
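the parity case is a telling test because parity only depends on the last digit, so anything that actually grasped the rule would handle numbers of any length. a minimal sketch of that baseline (my own illustration, not from any of the comments above):

```python
# parity depends only on the last digit, so this works for numbers
# of any length - a reasoning system should generalize the same way
def is_even(n: str) -> bool:
    return n[-1] in "02468"

print(is_even("7"))                                # small number: odd
print(is_even("123456789012345678901234567890"))   # huge number: even
```

a model that gets small numbers right but fails on long ones is presumably recalling memorized answers rather than applying this one-digit rule.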
do you have example prompts where it showed clear logical reasoning?
huh, i kinda assumed it was a term made up/taken by journalists mostly, are there actual research papers on this using that term?
because it’s a text generation machine…? i mean, i wouldn’t say i can prove it, but i don’t think anyone can prove it’s capable of thinking, much less of reasoning
like, it can string together a coherent sentence thanks to well crafted equations, sure, but i wouldn’t qualify that as “thinking”, though i guess the definition of “thinking” is debatable
New response just dropped
for it to “hallucinate” things, it would have to believe in what it’s saying. ai is unable to think - so it cannot hallucinate
A/B testing moment
my main question is: how much csam was fed into the model for training so that it could recreate more
i think it’d be worth investigating the training data used for the model
lmao. as if the ai was gonna have a better carbon footprint than the small plastic thing you replace every 5-10 years
where do you live where stuff’s so expensive? genuine question, because honestly, i’ve never seen such pricing here
most of the stuff i get from amazon (which is, to be fair, not much and mostly non-food/perishables) has free shipping (without prime) to amazon lockers or to your house if you have a >25€ (or maybe >40€ now…?) order
also, may be biased because i live in france, but like, a loaf of bread is at most 3€ here, and even in the most remote villages, you’ll likely not pay more than 1.30€ for a baguette
sony isn’t a person
and when they’re caught, they’ll dispute the claims with regulators, like every company does all the time.
i remember digging a bit into the french data protection office v. discord a while back, when they got hit with sanctions for not respecting gdpr, and they disputed every single claim, sometimes arguing in real bad faith, like them claiming they handle very little private user data, so they don’t need to do data protection analyses like the law says.
considering google’s sheer data empire, i imagine they play the same tricks, but like 1000× worse
i swear i argued with someone who said killing lightning would create so much e-waste, and that still sounds like a stupid argument to me…
you could, but they definitely pushed you to use a single account everywhere, even logging you in automatically to your google account in chrome if you use it on google search or vice-versa
you can definitely back up apps and most files using adb and a computer, and probably even your phone itself by doing adb over the network back to your phone
also, i think there’s a way of setting up a different location provider in the developer settings on android!
i mean, you likely already could get some out-of-spec chinese chargers… that’s always been a risk when going for low quality stuff!
“AI” today mostly refers to LLMs, and whichever LLM you’re using, you’ll likely face the same issues (wrong answers creeping in, tending towards mediocrity in its answers, etc.) - those seem to be things you have to live with if you want to use LLMs. if you know you can’t deal with it, another rebrand won’t help anything