Lemmy admins need to do whatever they can to handle CSAM if and when it arises. Users need to be understanding here because, as I’ve argued in other threads, CSAM poses a threat to the instance itself and to the admins personally if they can’t clean up the material in a timely manner.
This is likely going to get weird for a bit, including but not limited to:
- instances going offline temporarily
- communities going offline temporarily
- image uploads being turned off
- sign ups being disabled
- applications and approval processes for sign ups
- IP or GeoIP limiting (not sure if this feature currently exists in lemmy; I suspect it doesn’t, but that’s just a guess)
- totally SFW images being flagged as CSAM. I’m not advocating against the use of ML / CV approaches, but historically they aren’t 100% accurate and have gotten legit users incorrectly flagged. Example. (I’ve sketched roughly how this layering tends to work just below this list.)
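For anyone wondering where the false positives come from: automated screening is usually layered as a hash match against known material plus an ML classifier with a score threshold. Here’s a rough sketch of that idea; every name, function, and number below is a hypothetical placeholder for illustration, not anything Lemmy actually ships.

```python
# Rough sketch of layered automated screening. Everything here is a stand-in:
# the hash function, classifier, and hash list are placeholders, NOT real
# Lemmy code or a real detection service.

from dataclasses import dataclass


@dataclass
class ScanResult:
    blocked: bool
    reason: str


# Placeholder: a real deployment would compare a perceptual hash of the image
# against a hash list maintained by a reporting organization.
KNOWN_BAD_HASHES: set[str] = set()


def perceptual_hash(image_bytes: bytes) -> str:
    # Stand-in only. Real perceptual hashes are robust to resizing/re-encoding.
    return hex(hash(image_bytes) & 0xFFFFFFFF)


def classifier_score(image_bytes: bytes) -> float:
    # Stand-in for an ML/CV model returning a 0..1 "likely CSAM" score.
    return 0.0


def scan_upload(image_bytes: bytes, threshold: float = 0.9) -> ScanResult:
    # Hash matches are near-certain hits against already-known material.
    if perceptual_hash(image_bytes) in KNOWN_BAD_HASHES:
        return ScanResult(blocked=True, reason="matched known-material hash list")

    # The classifier is where false positives come from: set the threshold too
    # low and legit users get flagged, too high and material slips through.
    score = classifier_score(image_bytes)
    if score >= threshold:
        return ScanResult(blocked=True, reason=f"classifier score {score:.2f} >= {threshold}")

    return ScanResult(blocked=False, reason="passed automated checks")
```

The point of the sketch is just that the threshold is a trade-off an admin has to pick, and there is no value of it that never hits a legit user.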
I just want folks to know that major sites like reddit and facebook usually have (not very well) paid teams of people whose sole job is to remove this material. Lemmy has overworked volunteers. Please have patience, and if you feel like arguing about why any of the methods I mentioned above are BS, or if you have any questions, reply to this message.
I’m not an admin, but I’m planning on becoming one, and I’m sort of getting a feel for how the community responds to this kind of action. We don’t get to see it much on major social media sites because they aren’t as transparent (or as understaffed) as lemmy instances are.
This is an interesting idea. So if I’m understanding you correctly, the workflow would be like this:
- user uploads 4 images… 2 are flagged as CSAM
- user overrides the flag on one image, asserting that “no, this isn’t CSAM”
On other sites, I’ve seen this work by the content remaining hidden from everyone except the uploader until a team reviews it; if the team agrees, it’s allowed on the site. I think that’s different from what you’re describing, though: you’re suggesting the content stays online after the user overrides the flag, and a mod later double-checks whether the user was indeed trustworthy.
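To make sure I’m picturing the difference right, here’s a rough sketch of the two flows. All of the names and statuses are made up for illustration; none of this is actual Lemmy code.

```python
# Rough sketch of the two dispute/review flows as I understand them.
# Function and status names are hypothetical, NOT real Lemmy code.

from enum import Enum, auto


class Visibility(Enum):
    HIDDEN_PENDING_REVIEW = auto()  # visible only to the uploader
    PUBLIC = auto()
    REMOVED = auto()


def on_dispute_hide_first(image_id: int, review_queue: list[int]) -> Visibility:
    """Variant A (what I've seen elsewhere): a disputed image stays hidden
    until a mod reviews it, then it's either published or removed."""
    review_queue.append(image_id)
    return Visibility.HIDDEN_PENDING_REVIEW


def on_dispute_publish_first(image_id: int, review_queue: list[int]) -> Visibility:
    """Variant B (what I think you're describing): the dispute puts the image
    back online right away, and a mod double-checks it afterwards."""
    review_queue.append(image_id)
    return Visibility.PUBLIC


def on_mod_review(flag_was_correct: bool) -> Visibility:
    # In both variants the mod decision is final: remove if the flag was
    # right, publish (or keep published) if it wasn't.
    return Visibility.REMOVED if flag_was_correct else Visibility.PUBLIC
```

The only real difference is what the image’s visibility is during the window between the user’s dispute and the mod’s review.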
My only worry is that an untrustworthy user can keep the content online until a mod gets to the review, which increases how long the material stays up and increases the risk. It would be difficult to argue “this was done in the interest of user satisfaction, even though it meant more CSAM got out.” Ultimately, I don’t know how many people want to make that argument to a judge.