• 0 Posts
  • 22 Comments
Joined 3 years ago
Cake day: April 1st, 2022

  • Haha they thought it was too easy and were proven wrong!

    Honestly, if a place is obscure enough, even small barriers to entry help, like forums that don’t let you post on important boards until you’ve built up a reputation. There’s only so much effort an adversary is willing to put in, and if there isn’t a financial or major political incentive, that barrier can stay low.
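
    To make that concrete, here’s a minimal sketch of the kind of reputation gate I mean. Every name and threshold here is made up; the point is only that a small, boring hurdle already raises the cost of drive-by spam.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Account:
        approved_posts: int    # posts already accepted on low-stakes boards
        account_age_days: int

    # Hypothetical thresholds; tune to taste.
    MIN_POSTS = 10
    MIN_AGE_DAYS = 7

    def can_post_on_important_board(account: Account) -> bool:
        """Allow posting only once the account has a small track record."""
        return (account.approved_posts >= MIN_POSTS
                and account.account_age_days >= MIN_AGE_DAYS)

    print(can_post_on_important_board(Account(0, 1)))    # False: brand-new account
    print(can_post_on_important_board(Account(25, 90)))  # True: established account
    ```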



  • (edit: I accidentally skipped a word and didn’t realize you wrote ‘auto-report’ instead of ‘deleting them’. Read the following with a grain of salt)

    I’ve played (briefly) with automated moderation bots on forums, and the main thing stopping me from going much further than known-bad profiles (e.g. accounts that arrived at the site from a literal spamlist) isn’t just false positives but malicious abuse. I wanted to add a feature that would immediately hide an image behind a warning if it was reported as (say) porn, shock imagery or other extreme content, but if a user noticed this, they could file false reports to censor content until a staff member dismissed them.

    Could an external brigade of trolls get legitimate users banned, or their posts hidden, just by gaming your bot? That’s a serious issue: it could get real users’ work deleted, and in my experience, users take that very personally.
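
    The mitigation I’d reach for is roughly the sketch below (all names and thresholds are hypothetical): the bot never deletes, it only hides pending staff review, and it only acts when several distinct, established accounts report the same post for an extreme-content reason. A brigade of fresh throwaway accounts then can’t trip it, though a coordinated group of aged accounts still could, which is why a human stays in the loop.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Report:
        reporter_id: int
        reporter_age_days: int
        reporter_approved_posts: int
        reason: str   # e.g. "porn", "shock", "spam"

    # Assumed thresholds; the exact numbers matter less than the idea that a
    # wave of throwaway accounts shouldn't be able to trip the filter.
    MIN_REPORTER_AGE_DAYS = 30
    MIN_REPORTER_POSTS = 20
    TRUSTED_REPORTS_NEEDED = 3
    AUTO_HIDE_REASONS = {"porn", "shock"}

    def reporter_is_trusted(report: Report) -> bool:
        return (report.reporter_age_days >= MIN_REPORTER_AGE_DAYS
                and report.reporter_approved_posts >= MIN_REPORTER_POSTS)

    def should_auto_hide(reports: list[Report]) -> bool:
        """Hide (never delete) a post pending staff review, but only when
        enough distinct, established accounts reported it for an
        extreme-content reason."""
        trusted_reporters = {
            r.reporter_id for r in reports
            if reporter_is_trusted(r) and r.reason in AUTO_HIDE_REASONS
        }
        return len(trusted_reporters) >= TRUSTED_REPORTS_NEEDED
    ```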



  • Also consider not having an economy where our jobs dominate our lives.

    There are plenty of studies, videos and anecdotes discussing how, despite technology becoming ever more efficient, we’ve been working longer hours since the Industrial era began. Most of the older culture we consider traditional didn’t come from the media industries we see today; it came from families and communities having enough time together to create and share art and other media relevant to their own lives.


  • (although given the decentralised framework of the fedi, I’m not sure how that could even happen in the traditional sense).

    It’s possible to dominate and softly control a decentralized network, because such a network can re-centralize. As long as the average user doesn’t really care about those ideals (perhaps they’re only here for certain content, or to avoid a particular drawback of another platform), they may not bother to decentralize. And as long as a very popular instance never does anything so bad that its regular users leave all at once and it loses critical mass, it can gradually enshittify and enforce conditions on instances connecting to it, or even defederate altogether and become a central platform.

    For a relevant but obviously different case study: before the Reddit API exodus, there was a troll who would post shock images every day to attack lemmy.ml. Whenever an account was banned, they would simply register a new one on an instance that didn’t require accounts to be approved and continue trolling with barely any effort. Because of this, lemmy.ml began defederating from any instance that didn’t have a registration approval system, telling them they would be re-added once a signup test was enabled.

    lemmy.ml was one of the core instances, rivalled in size only by lemmygrad.ml and wolfballs (wolfballs was defederated by most other instances, and lemmygrad.ml by many of the other big ones), so an instance that couldn’t federate with lemmy.ml missed out on most of the network’s activity at the time. In effect, lemmy.ml pressured other instances into a policy change, albeit an overall beneficial one that made trolling harder, and in its own self-defence. One can imagine a malevolent large instance doing something similar if it grew to dominate the network, and that’s the kind of EEE (embrace, extend, extinguish) scenario many here fear with Threads and other attempts at moving large (anti-)social networks into the Fediverse.
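
    For the curious, that kind of federation-policy check could even be partially automated. The sketch below just queries the standard NodeInfo endpoint most fediverse software exposes; note that NodeInfo’s openRegistrations flag only says whether signups are open at all, not whether an application step is required, so this is only an approximation of what lemmy.ml actually enforced, and the peer domain in the example is made up.

    ```python
    import requests

    def has_open_registrations(instance_domain: str) -> bool | None:
        """Return the instance's advertised openRegistrations flag via
        NodeInfo, or None if it can't be determined."""
        try:
            well_known = requests.get(
                f"https://{instance_domain}/.well-known/nodeinfo", timeout=10
            ).json()
            # The well-known document links to one or more NodeInfo documents.
            href = well_known["links"][0]["href"]
            nodeinfo = requests.get(href, timeout=10).json()
            return bool(nodeinfo.get("openRegistrations"))
        except (requests.RequestException, KeyError, IndexError, ValueError):
            return None

    # Hypothetical usage: flag peers with fully open signups for manual review.
    for peer in ["example-instance.social"]:
        if has_open_registrations(peer):
            print(f"{peer}: open registrations, review federation policy")
    ```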