X sucks, but Threads is even worse. 99% of everything I have ever seen on Threads is pure distilled engagement bait, and half the time expanding replies gets stuck loading. I wish I were exaggerating, but I’m not.
It doesn’t advance your argument. Just so you know, you don’t come across as the one arguing in good faith here. If you are, it’s worth thinking about why.
Keep up the good work
That’s great. The history communities on the other site were consistently high quality and I miss them. How do you have time to do all that?
Oh shit you mean like AskHistorians? Is there enough density now for that?
Personally, I’ve found that LLMs are best as discussion partners, to put it in the broadest terms possible. They do well for things you would use a human discussion partner for IRL.
For anything but criticism of something written, I find that the “spoken conversation” features are most useful. I use it a lot in the car during my commute.
For what it’s worth, in case this makes it sound like I’m a writer and my examples are only writing-related, I’m actually not a writer. I’m a software engineer. The first example can apply to writing an application or a proposal or whatever. Second is basically just therapy. Third is more abstract, and often about indirect self-improvement. There are plenty more things that are good for discussion partners, though. I’m sure anyone reading can come up with a few themselves.
May I ask how you’ve used LLMs so far? Because I hear that type of complaint from a lot of people who have tried to use them mainly to get answers to things, or maybe more broadly to replace their search engine, which is not what they’re best suited for, in my opinion.
Put simply, some states get more electors than others to account for greater population, and each state decides how its electors vote based on its statewide popular vote. Nearly all states award every elector to the winner of their statewide popular vote (“winner takes all”), while Maine and Nebraska award most of theirs by congressional district.
This leads to a discrepancy between the popular vote and the electoral vote, and it’s mathematically biased against states with higher populations. So, votes in the more populous states (which tend to vote Democratic) are worth “less” in the Electoral College than those in less populous states, leading to Democrats winning the popular vote yet losing the actual election… which has happened in every election they’ve lost since Bush v. Gore, if I’m not mistaken. I’ll double check that and edit if I’m wrong.
Edit: Sorry, it did not happen in Bush v. Kerry; Bush won the popular vote in that one by about 2.4%. However, in the other two (Bush v. Gore and Trump v. Clinton), the popular vote was won by Gore and Clinton, not by Bush or Trump, Gore narrowly and Clinton by about two points.
Edit 2: This is also notably NOT made worse by gerrymandering, because the number of electors you get is equal to the combined number of senators and congressmen your state gets. Since all states apply their electors based on the popular vote result, it doesn’t matter what party alignments your congresspeople have, so gerrymandering plays no role here.
The mathematical bias comes from the fact that every state gets two senators no matter what the population is, and only your congressperson count is proportional to population, but both count toward your number of electors. So, less populous states have proportionally somewhat more “electors per capita” than states with higher populations.
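To make the “electors per capita” point concrete, here’s a rough sketch comparing the smallest and largest states. The populations are approximate 2020-census figures, used only for illustration:

```python
# Rough illustration of the "two senators for everyone" bias.
# Population figures are approximate 2020-census values, in millions.
states = {
    # name: (population_millions, house_seats)
    "Wyoming": (0.58, 1),
    "California": (39.5, 52),
}

for name, (pop_m, house_seats) in states.items():
    electors = house_seats + 2  # every state gets 2 senators regardless of size
    per_million = electors / pop_m
    print(f"{name}: {electors} electors, {per_million:.1f} per million residents")
```

Wyoming ends up with roughly 5 electors per million residents versus California’s roughly 1.4, even though House seats alone are (approximately) proportional. The flat +2 is what skews the ratio.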
I never said I thought training AI with the copyrighted work of others causes harm to others. If anything, I think training is analogous enough to human learning that it’s a gray area. However, I think there are different ethical concerns with AI training data than there are with piracy, and those concerns mostly arise from the profit being made from the models.
It’s not hypocritical if you believe that theft is wrong because it hurts another person, rather than wrong because you don’t deserve the thing or because it gives you an unfair advantage. Your argument leans heavily on the latter, while mine leans on the former.
That’s not quite true, though, is it?
$50 earned is yours to spend on anything. A $50 discount is offered by a vendor to entice you to spend enough of your money on them to make the discount worthwhile.
Pirates don’t pirate because they’re trying to save money on something they would have bought otherwise… typically they pirate because the amount they consume would bankrupt them if they purchased it through legitimate means, so they would never have been a paying customer in the first place.
So, if they wouldn’t have bought it anyway, and they’re not reselling it, did they really harm the vendor? The vendor’s revenue is the same either way.
That’s not really the same thing, in my opinion.
If you were able to pay for everything handily but pirated anyway, or if you resold pirated content, then yeah you have something similar to theft going on. But that’s not really the norm; those people are doing something bad irrespective of the piracy itself, aren’t they?
I’m not the above poster, but I really appreciate your argument. I think many people overcorrect in their minds about whether or not these models learn the way we do, and they miss the fact that they do behave very similarly to parts of our own systems. I’ve generally found that that overcorrection leads to bad arguments about copyright violation and ethical concerns.
However, your point is very interesting (and it is thankfully independent of that overcorrection). We’ve never had to worry about nonhuman personhood in any amount of seriousness in the past, so it’s strangely not obvious despite how obvious it should be: it’s okay to treat real people as special, even in the face of the arguable personhood of a sufficiently advanced machine. One good reason the machine can be treated differently is because we made it for us, like everything else we make.
I think there still is one related but dangling ethical question. What about machines that are made for us but we decide for whatever reason that they are equivalent in sentience and consciousness to humans?
A human has rights and can take what they’ve learned and make works inspired by it for money, or for someone else to make money through them. They are well within their rights to do so. A machine that we’ve decided is equivalent in sentience to a human, though… can that nonhuman person go take what it’s learned and make works inspired by it so that another person can make money through them?
If they SHOULDN’T be allowed to do that, then it’s notable that this scenario is only separated from what we have now by a gap in technology.
If they SHOULD be allowed to do that (which we could make a good argument for, since we’ve agreed that it is a sentient being) then the technology gap is again notable.
I don’t think the size of the technology gap actually matters here, logically; I think you can hand-wave it away pretty easily and apply it to our current situation rather than a future one. My guess, though, is that the size of the gap is of intuitive importance to anyone thinking about it (I’m no different) and most people would answer one way or the other depending on how big they perceive the technology gap to be.
Kind of, in that embedding anything from a site you can’t trust is inherently risky, but I’d say it’s not actually that bad, for two reasons:
You think the people you’re calling “NeoLibs” above are Reagan fans? Your criteria for neoliberal policies are “supports Ukraine and Israel at the same time” and “is aware of the current reality of Russian disinformation tactics”? Neither of those has anything to do with neoliberalism. I don’t think you know what “concern trolling” means, either.
I see, so you just use the term “NeoLib” to mean “people you disagree with” rather than “people with neoliberal political beliefs.”
The fuck are you talking about
That hasn’t been my experience /shrug
I considered writing at least a post somewhere after reading your comment/adding my reply, but to be honest I don’t even know where it would be best received
I have used it several times for long-form writing as a critic rather than a “co-writer.” I write something myself, tell it to pretend to be the person who would be reading the thing (“Act as the beepbooper reviewing this beepboop…”), and ask for critical feedback. It usually gives genuinely great advice, which I then incorporate. The process takes about as long as writing normally, but the result is materially better than what I would have written without it.
I’ve also used it to generate an outline to use as a skeleton while writing. Its own prose tends to be flat and heavy on the passive voice, so it kinda sucks at doing the writing for you if you want the result to be good. But it works well as a collaborator in these ways, and I think a lot of people miss that side of it.
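The “act as the reviewer” setup can be captured in a tiny helper. This is just a sketch: the role and document text are placeholders, and the role/content message format is the common chat-style convention, not any particular vendor’s API:

```python
def reviewer_prompt(role: str, document: str) -> list[dict]:
    """Build a chat-style message list asking a model to critique a draft
    from the perspective of its intended reader, instead of rewriting it."""
    return [
        {
            "role": "system",
            "content": f"Act as {role}. Give critical, specific feedback "
                       "on the text that follows; do not rewrite it yourself.",
        },
        {"role": "user", "content": document},
    ]

# Hypothetical usage: review a proposal as the person who will score it.
messages = reviewer_prompt(
    "a grant committee member reviewing this proposal",
    "Draft proposal text goes here...",
)
```

The key design choice is the system message: it fixes the model in the reader’s seat and explicitly forbids rewriting, which is what keeps the output as criticism you fold into your own draft rather than flat machine prose.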
It’s even worse on Threads, believe it or not.