• 0 Posts
  • 223 Comments
Joined 2 years ago
Cake day: July 4th, 2023

  • Once you’re in office and have access to tax dollars instead of campaign funds, you get them spent by putting that community infrastructure into place.

    One problem with that is that they’re running for a collective position. On the campaign trail, they’re captain of their own ship. A congresswoman is more of a crewmate: greater power, but more divided.

    One thing they can do, though, is never stop campaigning. They’ll be less active as a congresswoman, but the campaign pivots to the next election (let’s be honest, that’s already the norm; there is no break from politics anymore), and that next campaign can focus on on-the-ground community organizing the entire time.

    So, no, you won’t see the same level of activism funded by tax dollars, which she doesn’t control alone. But she, and others who follow her, can do good consistently through campaign funds while fighting for good in government at the same time.



  • Nah, that means you can ask an LLM “is this real” and get a correct answer.

    That defeats the purpose of several kinds of material.

    Deepfakes, for instance. International espionage, propaganda, companies who want “real people”.

    A simple is_ai flag of any kind is undesirable to those actors, and their unflagged output will end up back in every LLM’s training data, even an LLM that was behaving and flagging its own output.

    You’d need every LLM to do this, and there are open-source models and foreign ones. And as has already been proven, you can’t rely on an LLM detecting a generated product without such a flag.

    The correct way to do it would instead be to organize a not-AI certification for real content. But that would severely limit training data. It could happen once quantity of data isn’t the be-all and end-all for a model, but I dunno when, or if, that’ll be the case.
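    The certification idea above can be sketched in a few lines. This is a toy illustration only: real provenance schemes use public-key signatures and certificate chains rather than a shared secret, and every name here (the key, the functions) is hypothetical.

    ```python
    import hashlib
    import hmac

    # Toy sketch of "not-AI certification": a trusted publisher signs content
    # it attests was human-made. Hypothetical shared key for illustration;
    # a real scheme would use public-key signatures so anyone can verify.
    PUBLISHER_KEY = b"publisher-secret-key"

    def certify(content: bytes) -> str:
        """Return a tag attesting this content came from a certified human source."""
        return hmac.new(PUBLISHER_KEY, content, hashlib.sha256).hexdigest()

    def verify(content: bytes, tag: str) -> bool:
        """Check the tag; unsigned or tampered content fails verification."""
        return hmac.compare_digest(certify(content), tag)

    article = b"Reporting written and photographed by a human."
    tag = certify(article)
    print(verify(article, tag))         # True: certified content checks out
    print(verify(article + b"x", tag))  # False: altered content loses the cert
    ```

    The point of the design is the asymmetry the comment describes: nothing stops an LLM from emitting unflagged text, but it can’t forge a valid certification, so training pipelines could filter on the presence of a verifiable tag instead of trying to detect AI output.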


  • No, because there’s still no case.

    Law textbooks that taught an imaginary case would just get a lot of lawyers in trouble, because someone will eventually wanna read the whole case and will try to pull the actual case, not just a reference. Those cases aren’t susceptible to this because they’re essentially a historical record. It’s like the difference between a scan of the Declaration of Independence and a high school history book describing it. Only one of those things could be bullshitted by an LLM.

    The same applies to law schools. People reference back to cases all the time; there’s an opposing lawyer, after all, who’d love a slam-dunk win of “your honor, my opponent is actually full of shit and making everything up.” Any lawyer trained on imaginary material as if it were real will just fail repeatedly.

    LLMs can deceive lawyers who don’t verify their work. Lawyers are in fact required to verify their work, and the ones who have been caught using LLMs are quite literally not doing their job. If that weren’t the case, lawyers would just make up cases themselves; they don’t need an LLM for that. But it doesn’t happen, because it doesn’t work.