• 1 Post
  • 1.37K Comments
Joined 2 years ago
Cake day: June 11th, 2023


  • It’s a long history lesson, but the gist is that IBM made an architecture that allowed for modular, LEGO-style construction of computers. They were assholes and tried to lock it down by keeping the software secret and proprietary, but it was so popular that everyone else copied it and the IBM PC clones were born. The architecture then became the standard, and everyone could make components for a PC with (more or less) assurance that any component they made would be compatible and fit into (almost) any other computer.

    Phones, on the other hand, were born out of the necessity of being the smallest, most portable device possible. That meant bespoke solutions. The people chasing that format chose an architecture, ARM, that at the time required everything to be on a single chip: memory, storage, CPU, CMOS, everything has to be on the chip. Which means exchanging parts is not possible. System-on-chip became the smartphone standard. Now, technically ARM doesn’t always have to be SoC, but in practice it means two things: first, every phone model is a unique, bespoke production that will never exist again once out of print; second, it is a titanic task to reverse engineer certain parts of it, and the firmware for sensor input, for example, is always unique.

    This means that FOSS is at a disadvantage. Making free and open software for a phone means that either the manufacturer is magnanimous and gives you all the firmware, or, after a major effort to reverse engineer lots of pieces of software, the work becomes useless for the next model of phone. You either make your own open-standard phone, which is a multi-billion-dollar R&D endeavor, or you’re constantly shooting at a fast-moving target.

    No one has created an open standard that allows small-scale manufacturing of mutually interchangeable components for phones. RISC-V is close, but not yet terribly financially viable.


  • When it comes to this type of thing, there are two camps or scenarios:

    1. You give yourself a label and define it through your actions. “I’m emo, because I said so.” This is how many subcultures name themselves: social groups with the same interests give themselves a descriptor and run with it, usually around a shared taste in a music genre.

    2. You do you spontaneously, and then other people give you a label to describe you. Groups then usually adopt the name and run with it. A lot of labels come from this scenario as well.

    The point is that the two scenarios also interact and are in dialogue with each other. Some people redefine a label through their actions: terms that started as derogatory insults get appropriated by the in-group and redefined as an identity. Sometimes self-chosen labels are reinterpreted by the public and resignified by the out-group.

    It’s all very fluid and always changing. Don’t fret too much over it. Whatever label you give yourself or get assigned today may well mean nothing in a couple of years, because culture is alive and constantly moving.




  • I did a deep dive into AI research when the bubble first started with ChatGPT 3.5. It turns out most AI researchers are philosophers, because up to that point there was very little technology to actually discuss. Neural networks and machine learning were very basic and a lot of proposals were theoretical. Generative AI, meaning LLMs and image generators, existed as philosophical proposals before real technological prototypes were built. A lot of it comes from epistemological analysis mixed in with neuroscience and devops. It’s a relatively new trend that the Wall Street tech bros have inserted themselves into and come to dominate the space.


  • It’s obvious that you haven’t even touched a Samsung phone in the past 10 years and are just repeating misinformation. Carrier phones with preinstalled bloatware are a thing, but Samsung mostly did that in the heyday of Facebook and Twitter integration with data plans in the US, circa 2015. Newer phones and international versions have never had preinstalled social media apps, let alone apps installed at the system level. This was a widespread issue at the time with all phones, from Motorola to ASUS, and yes, even Apple. Not a Samsung-exclusive issue.

    Currently, even Samsung’s own applications can be uninstalled. There are ads on the Galaxy Store, where you would expect to see ads, and they are no more intrusive than the recommended apps on the Play Store or the App Store.

    There’s one bit of dark pattern left: after major upgrades, Samsung will show a notification suggesting you install recommended apps. But you can tap “don’t show this again” and it goes away forever. I’ve never seen an ad on my S24, ever.

    So, my suggestion is not to blindly trust everything you hear on the internet, no matter how geeky and knowledgeable the people may seem. Seek out a variety of POVs to form a more complex and nuanced opinion, and even seek first-hand experience, instead of sticking with a single person’s biased take. And definitely don’t loudly parrot something you have no first-hand experience with.



  • It’s mostly the TV. The input-lag difference between wired and Bluetooth should be very small, though the Switch is not optimized for wired controllers. The variability of TV response times, on the other hand, is massive in comparison, especially on modern TVs with heavy post-processing that think they’re being clever by interpolating frames, or other shit like bad HDR implementations, etc. HDMI DRM also adds latency.

    All that makes most TVs subpar for gaming. I still game on a TV, mostly cozy games, but I accept that nothing competitive will come of gaming on a TV.



  • Which they can sell to advertisers and to LLM/AI companies.

    It’s not talked about much, because it’s not in the best interest of the stockholders, but AI as popularized by OpenAI, both image and text generators, has already hit a wall of data availability. There’s no more human-made data. They are now resorting to synthetic data, which means having a first-generation LLM create tons of data to train newer or more tailored models. The issue is that these new models develop problems from inbreeding of the data: training models on other genAI output poisons them and corrupts their generative power within just a few generations (the toy sketch at the end of this comment illustrates the effect). This is why genAI images are increasingly turning yellow, and the same reason newer models are more fragile and hallucinate or go psychotic more easily than older models. So the AI companies need new sources of human-made data to mix in with the synthetic data.

    The main problem is that we ran out: there’s no more data made by humans to train AI with. Humans don’t create new data fast enough to train all the new models with the new doodads and features the AI companies want to sell. So now these companies will pay anything just to get their hands on fresh material. This is why every app on the planet is now pivoting to do anything it can to get chats going: it’s a new source of data to sell to data brokers.
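
    As a rough toy sketch of that inbreeding loop (purely illustrative, not any real training pipeline): if each generation of a model is trained only on the previous generation’s outputs, and typical outputs are favored over rare ones, the diversity of the training data collapses within a handful of generations.

    ```python
    # Toy illustration (hypothetical, not a real training pipeline) of the
    # "data inbreeding" effect: each generation is trained only on samples
    # produced by the previous generation's model, rare/tail samples get
    # under-represented, and diversity shrinks generation after generation.
    import numpy as np

    rng = np.random.default_rng(42)

    # Generation 0: "human-made" data with plenty of diversity (wide Gaussian).
    data = rng.normal(loc=0.0, scale=1.0, size=50_000)

    for gen in range(1, 8):
        mu, sigma = data.mean(), data.std()              # "train" a toy model
        synthetic = rng.normal(mu, sigma, size=50_000)   # model generates data
        # Generative models favor typical outputs, so tail samples are
        # under-represented; model that by discarding anything past 2 sigma.
        data = synthetic[np.abs(synthetic - mu) < 2 * sigma]
        print(f"gen {gen}: std of training data = {data.std():.3f}")
    ```

    Running it shows the spread of the "training data" shrinking every generation, which is the same collapse dynamic described above, just in miniature.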