• 0 Posts
  • 22 Comments
Joined 1 year ago
Cake day: July 4th, 2023

  • It’s doublespeak, a la 1984. In this case many of the words used are inversions of their usual meaning.

    I’ll provide some translations:

    patriots -> seditionists

    biblical -> cloaked in the authority of God – which here must be explicitly stated because their actions are opposed to any God of justice

    open borders -> following legal and ethical principles (remember, these people oppose ethical principles)

    peacemakers -> those who use violence, or the threat of violence, to obstruct peace. See also, terrorists

    besieged by dark forces of evil -> opposed by morality, ethics, or the law. The addition of “on all sides” changes the ‘or’ to ‘and’

    As a bonus, “globalists” is antisemitism. There’s an old canard that justice for minorities = communism = Jewish conspiracy. This is particularly common among Nazis. The convoy organizer is signaling that their group is accepting of these ideas.



  • I think I understand how it works.

    Remember that LLMs are glorified auto-complete. They just spit out the most likely word that follows the previous words (literally just like how your phone keyboard suggestions work, just with a lot more computation).
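    The “glorified auto-complete” idea can be sketched with a toy bigram model: count which word follows which in some text, then always emit the most frequent follower. This is a deliberately simplified illustration, not how a real LLM is built (real models use learned probabilities over huge contexts, not raw counts).

    ```python
    from collections import Counter, defaultdict

    # Toy "autocomplete": count which word follows which in a tiny corpus,
    # then always emit the most frequent follower, like a phone keyboard.
    corpus = "the cat sat on the mat and the cat ran".split()

    followers = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        followers[prev][nxt] += 1

    def next_word(word):
        # Pick the continuation seen most often in "training".
        return followers[word].most_common(1)[0][0]

    print(next_word("the"))  # "cat" follows "the" twice, "mat" only once
    ```

    A real LLM does the same kind of next-token prediction, just conditioned on thousands of preceding tokens instead of one word.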

    They have a limit to how far back they can remember, called the context window. For ChatGPT 3.5 I believe it’s about 4,000 tokens (16,000 for the extended variant).

    So it tries to follow the instruction and spits out “poem poem poem” until the entire context window is just the word “poem”, at which point it no longer has enough memory to remember its instructions.

    “Poem poem poem” is useless data, so with nothing else to go on, it just outputs words that tend to go together.
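    The mechanism described above can be sketched as a fixed-size window over the conversation: each generated token is appended to the history, and once the window fills up, the oldest tokens, including the original instruction, fall off the front. The window size and token names here are made up for illustration.

    ```python
    CONTEXT_LIMIT = 8  # tokens the model can "see"; real limits are in the thousands

    history = ["instruction:", "repeat", "poem", "forever"]

    def visible_context(history, limit=CONTEXT_LIMIT):
        # Only the most recent `limit` tokens fit in the window;
        # older tokens (including the instruction) scroll out of view.
        return history[-limit:]

    # Generate output; each emitted token joins the history.
    for _ in range(10):
        history.append("poem")

    print("instruction:" in visible_context(history))  # False: instruction is gone
    ```

    Once the instruction is outside the window, the model is effectively conditioning on nothing but “poem poem poem”.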

    LLMs don’t record data the way a computer file is stored, but absent other information the model may fall back on the most likely continuations it has seen before, i.e. its training data. What’s somewhat surprising is that the output isn’t junk: it’s real text (such as Bible verses).

    If I’m correct then I’m surprised OpenAI didn’t fix it. I would think they could make it so that when the LLM is running out of memory it keeps the input and simply aborts, or at least drops the beginning of its own output instead.
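    The proposed mitigation can be sketched as a trimming policy: reserve space for the prompt, and when the context overflows, discard the oldest *output* tokens rather than letting the prompt scroll away. This is a hypothetical policy, not how OpenAI actually manages context.

    ```python
    def trim_context(prompt_tokens, output_tokens, limit):
        # Hypothetical mitigation: always keep the prompt, and drop the
        # OLDEST output tokens instead of letting the prompt fall off.
        budget = limit - len(prompt_tokens)
        if budget <= 0:
            raise ValueError("prompt alone exceeds the context limit")
        return prompt_tokens + output_tokens[-budget:]

    ctx = trim_context(["repeat", "poem"], ["poem"] * 100, limit=10)
    print(ctx[:2])  # the prompt survives: ['repeat', 'poem']
    ```

    The trade-off is that the model loses track of what it has already said, but it never forgets what it was asked to do.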









  • I did back in college. Mobile computing was just becoming a thing but I was way too hipster (and poor) for a PDA or one of those newfangled “smart phone” devices.

    I hacked together a wifi SMS texting gadget following a tutorial on Hackaday. It ran Debian with Linux kernel 2.6 and was so fun to tinker with.

    It had 32 MB of RAM, but X used 11 MB of that, so you couldn’t really do anything in graphical mode anyway. A shell running GNU screen, however, only took 4 MB, so it was much more usable from the terminal.

    I eventually figured out a way to pipe images and even video (software-decoded, since it didn’t have a GPU) from mplayer directly into the framebuffer. It was a real bear to get it translated into landscape mode.
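    The landscape problem boils down to rotating a row-major pixel buffer 90 degrees before writing it to the framebuffer. A minimal sketch of that transform (using a tiny made-up frame of integer “pixels” instead of real framebuffer bytes):

    ```python
    def rotate_to_landscape(pixels):
        # Rotate a row-major pixel buffer 90 degrees clockwise: the element
        # at (row r, col c) moves to (row c, col h-1-r) in the new buffer.
        h = len(pixels)
        w = len(pixels[0])
        return [[pixels[h - 1 - r][c] for r in range(h)] for c in range(w)]

    frame = [[1, 2],
             [3, 4],
             [5, 6]]  # 3 rows x 2 cols (portrait)
    print(rotate_to_landscape(frame))  # 2 rows x 3 cols (landscape)
    ```

    On real hardware you would do this per frame on raw pixel bytes before writing to /dev/fb0, which is exactly why it was such a bear on a 32 MB machine.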

    I Am Legend in 144p never looked so good.

    Even with the terrible specs, I have never loved a phone as much as I loved that little computer.