• 0 Posts
  • 98 Comments
Joined 1 year ago
Cake day: August 2nd, 2023




  • Your description is how pre-llm chatbots work

    Not really; we just parallelized the computing and used other models to filter the training data and tokenize it. Sure, the loop looks more complex because of the parallelization and because the words used as inputs and selections are tokenized, but that doesn’t change what the underlying principles are here.

    Emergent properties don’t require feedback. They just need components of the system to interact to produce properties that the individual components don’t have.

    Yes, they need proper interaction, or, you know, feedback, for this to occur. Glad we covered that. Having more items but gating their interaction is not adding more components to the system; it’s creating a new system to follow the old, which in this case is still just more probability calculations. Sorry, but chaining probability calculations is not going to somehow make something sentient or aware. For that to happen it would need to be able to influence its internal weighting or training data without external aid. Hint: these models are deterministic, meaning there is zero feedback or interaction to create emergent properties in this system.

    Emergent properties are literally the only reason llms work at all.

    No, LLMs work because we massively increased the size and throughput of our probability calculations, allowing increased precision in the predictions, which means the output looks more intelligible. That’s it. Garbage in, garbage out still applies, and making the model larger does not mean that garbage is going to magically create new control loops in your code. It might increase precision, since there are more options to compare and weight against, but it does not change the underlying system (a toy version of that probability calculation is sketched below).
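
    A minimal sketch of what “probability calculations” means here (purely illustrative and hypothetical; real LLMs use neural networks over tokens, not literal count tables, but the principle is the same): count which word follows which in a corpus, then turn the counts into next-word probabilities. A bigger corpus sharpens the numbers; it doesn’t change the mechanism.

    ```python
    from collections import Counter, defaultdict

    # Hypothetical toy corpus; a real model is trained on vastly more text.
    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    # Count how often each word follows each other word.
    follow_counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follow_counts[prev][nxt] += 1

    def next_word_probs(word):
        """Turn raw follow-counts into a next-word probability distribution."""
        counts = follow_counts[word]
        total = sum(counts.values())
        return {w: c / total for w, c in counts.items()}

    print(next_word_probs("the"))  # {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}
    print(next_word_probs("sat"))  # {'on': 1.0}
    ```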


  • No, the queue will now add popular playlists to what you were listening to when you restart the app, if your previous queue was a generated one. I’m not sure of the exact steps to cause it, but it seems like if you were listening to a daily playlist and close the app, the next day the playlist has updated, and instead of pointing to the new daily it decides to point to one of the popular playlists for your next songs in queue. It doesn’t stop the song you paused on; it just adds new shit to the queue after it once it loses track of where to point. Seems like it should just start shuffling your liked songs in that case, but nope, it points to a random pop playlist.




  • If you give it 10 statements, 5 of which are true and 5 of which are false, and ask it to correctly label each statement, and it does so, and then you negate each statement and it correctly labels the negated truth values, there’s more going on than simply “producing words.”

    It’s not that more is going on; it’s that it had such a large training set that these true vs. false statements are likely covered somewhere in its set, and the probability says it should assign true or false to the statement.

    And then, look at that, your next paragraph states exactly that: the models trained on true/false datasets performed extremely well at labelling true or false. It’s saying the model is encoding, or setting weights for, the true and false values when that’s the majority of its data set. That’s basically it; you are reading too much into the paper.


  • AI has been a thing for decades. It means artificial intelligence; it does not mean a large language model. A specially designed system that operates based on predefined choices or operations is still AI, even if it’s not a neural network and looks like classical programming. The computer enemies in games are AI: they mimic an intelligent player artificially. The computer opponent in Pong is also AI (a toy sketch of that kind of rule-based opponent follows below).

    Now, if we want to talk about how stupid it is to use a predictive algorithm to run your markets, when it really only knows about previous events and can never truly extrapolate new data points and trends into actionable trades, then we could be here for hours. Just know it’s not an LLM, and there are different categories of AI, of which an LLM is its own category.
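
    For the Pong point, here is a hypothetical sketch of what that kind of “classical” AI opponent looks like: a couple of hard-coded rules that move the paddle toward the ball. No learning and no neural network, yet it still artificially mimics a player, which is all “AI” meant in games for decades.

    ```python
    # Hypothetical rule-based Pong opponent: a few if-statements are enough
    # to mimic a player; no neural network or learning involved.
    def paddle_ai(paddle_y: float, ball_y: float, speed: float = 4.0) -> float:
        """Return the paddle's new y position for one frame."""
        if ball_y > paddle_y + 2:      # ball is below the paddle centre
            return paddle_y + speed
        if ball_y < paddle_y - 2:      # ball is above the paddle centre
            return paddle_y - speed
        return paddle_y                # close enough, don't jitter

    # Example frame updates with made-up positions:
    y = 50.0
    for ball_y in [60.0, 55.0, 40.0]:
        y = paddle_ai(y, ball_y)
        print(y)  # 54.0, 54.0, 50.0
    ```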


  • Do you understand how they work or not? First, I take all the human text online. Next, I rank how likely those words come after one another. Last, I write a loop that keeps getting the next most probable word until the end-of-line character is judged most probable (roughly the loop sketched below). There you go; that’s essentially the loop of an LLM. There are design elements that make creating the training data quicker, or the model quicker at picking the next word, but at the core this is all they do.

    It makes sense to me to accept that if it looks like a duck, and it quacks like a duck, then it is a duck, for a lot (but not all) of important purposes.

    I.e. the only duck it walks and quacks like is autocomplete; it does not have agency or any other “emergent” features. For something to even have an emergent property, the system needs to have feedback from itself, which an LLM does not.
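
    As a rough illustration of the loop described above (a deliberately simplified, hypothetical sketch; a real LLM predicts over tokens with a neural network and usually samples instead of always taking the top choice), here is greedy next-word generation over a hand-written probability table:

    ```python
    # Hypothetical next-word probability table; a real LLM computes these
    # probabilities with a neural network, but the generation loop itself
    # is the same idea: pick a next word, append it, repeat until the end token.
    next_word_probs = {
        "<start>": {"the": 1.0},
        "the":     {"cat": 0.5, "dog": 0.5},
        "cat":     {"sat": 1.0},
        "dog":     {"sat": 1.0},
        "sat":     {"down": 0.7, "<end>": 0.3},
        "down":    {"<end>": 1.0},
    }

    def generate(max_words: int = 20) -> str:
        """Greedy decoding: always take the most probable next word until <end>."""
        word, output = "<start>", []
        for _ in range(max_words):
            probs = next_word_probs[word]
            word = max(probs, key=probs.get)  # most probable next word
            if word == "<end>":
                break
            output.append(word)
        return " ".join(output)

    print(generate())  # "the cat sat down"
    ```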


  • Single player games rarely need or demand “continued support” and player numbers aren’t indicative of that

    Sure, maybe if the gaming industry didn’t constantly release buggy, broken messes. But alas, that’s not the world we live in, and it is very much a metric I care about, to know whether or not a game is going to become abandonware, or at least have community support if the developer won’t provide it. These metrics let that community or developer understand whether there is a player base that would benefit, or a market to keep selling to. So yes, they add value for players.

    Single player player numbers aren’t indicative about things getting a sequel, low player count games get sequels, high player count games don’t get sequels. It has no direct bearing.

    They very much are if the game is single-player based. Acting as if demand is not a reason for games to get sequels, or as if the budgets that come from player sales are not relevant, is completely naive. Yes, companies can run into financial hardship, get acquired, and hit all manner of other circumstances that can lead to development being stopped whether there was an active player base or not. That’s not what these metrics represent, and they can give you an idea of which IP might get cut if a studio is acquired. They are useful and helpful, and I like to see those counts for my own understanding.

    If you want to check if there are guides you can just Google it, it’s a lot more useful to just Google it. Then you’ll actually know instead of guessing.

    Sure, that used to work before SEO killed search results; now it’s quicker to check a player count on Steam than to wade through garbage AI-generated articles to find out if there is an active community following the game. It’s not a guess either: if there are many people playing, then there will be demand for content on YouTube or other platforms, which means I can find guides.

    Knowing single player, player counts is really just for vague curiosity. There’s no real use to it.

    The only reason to hide it is to trick users into buying abandonware games, or to obscure how badly a game is doing. Keeping those stats up gives you valuable information, as I have pointed out. You are arguing in bad faith here, and I honestly don’t know why, unless you have some gacha game on Steam whose player stats you want to hide to hopefully drive some sales, which is disingenuous.