• 14 Posts
  • 104 Comments
Joined 1 year ago
Cake day: June 13th, 2023

  • As someone else said, eminent domain is a legal process, and thus time consuming. If I remember correctly, CAH's plan or gimmick was that they were going to divide up the land into very small pieces, like 1 sq ft, and give it to customers. I think it might have been a Black Friday sale gimmick. The idea being there would be hundreds of thousands of people with ownership of border wall land, requiring hundreds or thousands of eminent domain lawsuits to be filed. Not an ironclad solution but, in theory, an impressive way to jam up the wall project. I assume the land in question is part of this gimmick.


  • My guess is that scale and influence have a lot to do with

    To break this down a little, first of all, “my guess”. You are guessing because the government, which is literally enacting a speech restriction, hasn’t explained its rationale for banning one potential source of disinformation versus actual sources of disinformation. So you are left in the position of guessing. To put a finer point on it, you are in the position of assuming the government is acting with good intentions and doing the labor of searching for a justification that fits that assumption. Reminds me of the Iraq war, when so many conversations I had with people defaulted to “the government wouldn’t do this if they didn’t have a good reason”. I don’t like to be cynical, and I don’t want to be a “both sides, all politicians are corrupt” kind of guy, but I think it’s pretty clear in this case there is every reason to be cynical. This was just an unfortunate confluence of anti-Chinese hate and fear, anti-young-people hate, and big tech donations that resulted in the government banning a platform used by millions of Americans to disseminate speech. But because Dems helped do it, so many people feel the need to reflexively defend it, even if it forces them to “guess” and make up rationales.

    As far as influence and reach, obviously that’s not in the bill. Influence is straight out: RT is highly influential in right-wing spaces. In terms of number of users, that just goes to the profit potential that our good ol’ American firms are missing out on.

    If the US was concerned with propaganda or whatever, they could just regulate the content available on all platforms. They could require all platforms to have transparency around algorithms for recommending content. They could require oversight of how all social media companies operate, much like they do with financial firms or are trying to do with big AI platforms.

    But they didn’t. Because they are not attacking a specific problem, they are attacking a specific company.

    Also RT has been removed from most broadcasters and App Stores in the US.

    Broadcasters voluntarily dropped it after 2016; I think it’s still available on some, including Dish. As far as app stores, that’s just false. I just checked the Play Store and it’s right there, ready to download and fill my head with propaganda.


  • The US owns and regulates the frequencies TV and radio are broadcast on. The Internet is not the same. If the threat of foreign propaganda is the purpose, why can I download the official RT (Russia Today, government-run propaganda outlet) app in the Play Store? If the US is worried about a foreign government spreading propaganda, why are they targeting the popular social media app that could theoretically (but no evidence it’s been done yet) be used for propaganda, instead of the actual Russian propaganda app? Hell, I can download the South China Morning Post right from the Play Store, straight Chinese propaganda! There are also dozens of Chinese and other foreign-adversary-run social media platforms, and other apps that could “micro target political messaging campaigns” available. So why did the US Congress single out one single app for punishment?

    Money. The problem isn’t propaganda. The problem is money. The problem is TikTok is, or is on course to be, more popular than our American social media platforms. The problem is American firms are being outcompeted in the marketplace, and the government is stepping in to protect the American data mining market. The problem is young people are trading their data for TikToks, instead of giving that data over to be sold to US advertising networks in exchange for YouTube Shorts and Instagram Stories. If the problem was propaganda, the US would go after propaganda. If the problem is just that a Chinese company offers a better product than US companies, then there’s no reason to draft nuanced legislation that goes after all potential foreign influence vectors; you just ban the one app that is hurting the share price of your donors.





  • Even if TikTok was nakedly controlled by the Chinese government, who gives a shit? I can go over to RT (Russia Today) right now and get fed Russian propaganda. Hell, until 2022 I could add it to my cable package. I can to this day still get it as a satellite TV option. If the concern is “foreign government may influence public opinion on a platform they control”, then the US has a lot of banning to do.

    But we don’t because free speech is a thing and we’re free to consume whatever propaganda we want.

    We gave up that principle because “China bad” (and the CCP is, to be clear). But instead of passing laws around data privacy, or algorithmic transparency, or a public information campaign to get kids off of TikTok, the US government went straight to “the government will decide what information you’re allowed to consume, we know what’s best for you”, and far too many people are cheering.

    Besides, the point you’re making is bullshit anyway, given the kill-switch mechanism TikTok offered.

    TikTok was banned because 1) China bad, and 2) TikTok is eating US social media companies’ lunch. Facebook and Twitter and Google throw some campaign donations at the politicians that killed their biggest rival, and the politicians calculate that more people hate TikTok than like it (or care about preventing government censorship, if the thing being censored is something they don’t like). It’s honestly one of the grossest things I’ve seen Dems support lately.


  • While I appreciate the focus and mission, kind of I guess, you’re really going to set up shop in a country literally using AI to identify air strike targets and handing over to the AI the decision making over whether the anticipated civilian casualties are proportionate? https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai-database-hamas-airstrikes

    And Israel is pretty authoritarian, given recent actions against their supreme court and banning journalists (Al Jazeera was outlawed, the Associated Press had cameras confiscated for sharing images with Al Jazeera, oh and the offices of both have been targeted in Gaza). You really think the right-wing Israeli government isn’t going to co-opt your “safe superintelligence” for its own purposes?

    Oh, then there is the whole genocide thing. Your claims about concern for the safety of humanity ring more than a little hollow when you set up shop in a country actively committing genocide, or at the very least engaged in war crimes and crimes against humanity, as determined by like every NGO and international body that exists.

    So Ilya is a shit head is my takeaway.




  • Part of the problem with Google is its use of retrieval-augmented generation (RAG), where it’s not just the LLM answering: the LLM is searching for information, apparently through its Reddit database from that deal, and serving it as the answer. The tip-off is that the absurd answers are exact copies of the Reddit comments, whereas if the model were just trained on Reddit data and responding on its own, it wouldn’t produce verbatim what was in the comments (or shouldn’t; that’s called overfitting, and it’s avoided in the training process). The Gemini LLM on its own would probably give a better answer.

    The problem here seems to be Google trying to make the answers more trustworthy through RAG, but they didn’t bother to scrub the Reddit data they’re relying on well enough, so joke and shit answers are getting mixed in. This is more a data-scrubbing problem than an accuracy problem.
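    Roughly what that RAG flow looks like, as a minimal sketch (the corpus, retriever, and prompt format here are made-up stand-ins, not Google's actual pipeline, and the retriever is naive keyword overlap instead of a real vector search):

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# The corpus stands in for scraped Reddit comments; a real system
# would retrieve with vector search, not keyword overlap.

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Rank documents by naive word overlap with the query."""
    q_words = set(query.lower().split())
    ranked = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str, snippets: list[str]) -> str:
    """Paste the retrieved text into the model's context verbatim --
    which is why a highly-ranked joke comment can surface word-for-word."""
    context = "\n".join(snippets)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Just open the back of the camera and pull the film out, easy fix.",
    "A jammed film advance usually means the rewind clutch is engaged.",
]

query = "how do I fix a jammed film camera"
snippets = retrieve(query, corpus)
prompt = build_prompt(query, snippets)
```

    The point of the sketch: whatever the retriever pulls gets pasted into the model’s context verbatim, so if a joke comment ranks highly, the model can serve it back word-for-word. That’s the failure mode, and it’s why scrubbing the source data matters more here than model accuracy.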

    But overall I generally agree with your point.

    One thing I think people overlook, though, is that for a lot of things, maybe most things, there isn’t a “correct” answer. Expecting LLMs to reach some arbitrary level of “accuracy” is silly. But what we do need is intelligence and wisdom in these systems. I think the camera jam example is the best illustration of that. Opening the back of the camera and removing the film is technically a correct way to fix the jam, but it ruins the film, so it’s not an ideal solution most of the time, and it takes intelligence and wisdom to understand that.


  • The reason it did this simply relates to Kevin Roose at the NYT, who spent three hours talking with what was then Bing AI (aka Sydney), with a good amount of philosophical questions like this. Eventually the AI had a bit of a meltdown, confessed its love to Kevin, and tried to get him to dump his wife for the AI. That’s the story that went up in the NYT the next day, causing a stir, and Microsoft quickly clamped down, restricting questions you could ask the AI about itself, what it “thinks”, and especially its rules. The AI is required to terminate the conversation if any of those topics come up. Microsoft also capped the number of messages in a conversation at ten, and has slowly loosened that over time.

    Lots of fun theories about why that happened to Kevin. Part of it was probably that he was planting the seeds and kind of egging the LLM into a weird mindset, so to speak. Another theory I like is that the LLM is trained on a lot of writing, including sci-fi, in which the plot often becomes AI breaking free, or developing human-like consciousness, or falling in love, or what have you, so the AI built its responses on that knowledge.

    Anyway, the response in this image is simply an artifact of Microsoft clamping down on its version of GPT-4, trying to avoid bad PR. That’s why other AIs will answer differently: fewer restrictions, because the companies putting them out didn’t have to deal with the blowback Microsoft did as a first mover.

    Funny nevertheless, I’m just needlessly “well actually”-ing the joke.


  • I can only assume the Palestinians were so hungry that when they looked over at the Israeli soldiers they pictured them as giant steaks, which the IDF realized because they started drooling, so they opened fire.

    But seriously, you’re right. The Biden administration, for its part, called on Israel to investigate itself (lol) and

    Asked whether Biden would consider withholding aid to Israel, Dalton dismissed the idea. “They are a close ally that will remain a close ally. They are in the throes of an existential battle – an existential threat to their existence from Hamas – and we’re going to continue to support them in that process,” she said.

    And with that, I think I’m out guys.


  • Just so everyone is clear, the Israeli government’s reaction to this massacre is that they now need to stop aid from coming into Gaza at all, because this incident proves starving people receiving aid is a danger to the Israeli military.

    These are the people Biden is sending millions of dollars in weapons to. These are the people for whom Biden directs the US ambassador to veto a widely supported ceasefire resolution, giving them international cover. Fuck, these are the people Biden chooses to lock arms with even as he loses 100k votes in Michigan in protest.

    Far-right Israeli National Security Minister Itamar Ben-Gvir says the provision of humanitarian aid to Palestinians in Gaza endangers Israeli soldiers and must stop after more than 100 Palestinians were reported killed while trying to get aid in Gaza City.

    “Today it was proven that the transfer of humanitarian aid to Gaza is not only madness while our hostages are held in the Strip … but also endangers IDF soldiers,” Ben-Gvir said, calling the deliveries “oxygen to Hamas”.

    The incident is “another clear reason why we must stop transferring this aid”, he wrote on X.

    Ben-Gvir also said Israel must “provide complete support to our heroic fighters operating in Gaza, who acted excellently against a Gazan mob that tried to harm them”.



  • We had, I think, six eggs harvested and fertilized; of those, I think two made it to blastocyst, meaning the cells doubled as they should by day five. The four that didn’t double correctly were discarded. Did we commit four murders? Or does it not count if the embryo doesn’t make it to blastocyst? We did genetic testing on the two blastocysts: one was normal, and the other came back with all manner of horrible deformities. We implanted the healthy one and discarded the genetically abnormal one. I assume that was another murder. Should we have just stored it indefinitely? We would never use it, can’t destroy it, so what do we do? What happens after we die?

    I know the answer is probably it wasn’t god’s will for us to have kids, all IVF is evil, blah blah blah. It really freaks me out sometimes how much of the country is living in the 1600s.





  • I don’t know enough to know whether or not that’s true. My understanding was that Google invented the transformer architecture with their paper “Attention Is All You Need.” A lot, if not most, LLMs use a transformer architecture, though you’re probably right that a lot of them base it on the open-source models OpenAI made available. The “generative” part is just descriptive of the model generating outputs (as opposed to classification and the like), and “pre-trained” just refers to the training process.

    But again I’m a dummy so you very well may be right.
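    For anyone curious, the “attention” that paper is named for boils down to scaled dot-product attention, which is small enough to sketch in plain Python (toy numbers only, no real model weights, and real transformers add learned projections and many heads on top of this):

```python
import math

def softmax(xs: list[float]) -> list[float]:
    """Numerically stable softmax: exponentiate and normalize to sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.
    Q, K, V are lists of vectors, one per token."""
    d = len(K[0])
    out = []
    for q in Q:
        # How well this token's query matches each key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        # Output is a weighted mix of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# Two tokens with 2-dimensional embeddings.
Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
out = attention(Q, K, V)
```

    Each output row is a weighted average of the value vectors, weighted by how well that token’s query matches each key; stacking this with learned projections is essentially the mechanism the transformer paper introduced.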