I wonder what kind of csam detection they have. If they’re only relying on hash matching, they’re gonna get fucked by novel genai csam. This is why stuff like fedi-safety exists, which they could use as well.
It seems to be unspecified “automated and manual” systems plus reports from the NCMEC (https://lemm.ee/post/65739566/20890503), which they process quite fast (https://lemm.ee/post/65739566/20890630).
Sorry, your links don’t seem to work. Maybe those posts were deleted.
In any case, if their “automated” is just hash matching, it’s not going to cut it.
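To make concrete why that falls short, here’s a minimal sketch (the hash list, digest and image bytes below are made up, and real deployments use perceptual hashes like PhotoDNA or PDQ rather than plain SHA-256) of what a pure hash-blocklist check boils down to. It can only flag images somebody has already seen and hashed, so a freshly generated image passes every time.

```python
import hashlib

# Hypothetical blocklist of digests of already-known images (placeholder value).
# Perceptual hashes tolerate resizing and recompression, but the principle is
# identical: the image has to be in somebody's database already.
KNOWN_BAD_HASHES = {"0" * 64}

def is_known_bad(image_bytes: bytes) -> bool:
    """True only if this exact file is already on the blocklist."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_BAD_HASHES

# A brand-new AI-generated image has never been hashed by anyone,
# so it sails straight through this kind of check.
novel_image = b"bytes of a freshly generated image"
print(is_known_bad(novel_image))  # False
```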
Just guessing what the links may have been…
Possibly my post on lemmy.world, removed due to breaking rule 2, “Only tech related news or articles”
I’ll copy-paste my comment from there:
In the reply to Patreon they mentioned having some automated and manual ways of removing CSAM, plus “closely working with NCMEC”, but I have no idea what that means.
And these statistics of resolved reports: https://www.missingkids.org/content/dam/missingkids/pdfs/cybertiplinedata2024/2024-notifications-by-ncmec-resulting-content-removal.pdf
A total of 128 reports, resolved in 1.91 days on average. That’s less than half the time taken by Amazon, Google and Microsoft (for Bing).
The other link might have been to this comment:
“Having manual ways to remove csam” means almost nothing. All of lemmy has a “manual way to remove csam”. “closely working with NCMEC” can mean they just use the cloudflare mechanism, which is just hash matching. The point is, it’s very easy for a malicious actor to upload csam and then report the instance to Patreon for it, without ever reporting it to the admins.
Can’t you always attempt uploads until they bypass arbitrary filters and then report-snipe on that?
How would a content-based filter prevent this if the malicious actor simply needs to upload correspondingly more images?
I think the sad reality is that the only escape here is scale. Once you have been hit by this attack and been cleared by the 3rd parties, you’d have precedent for when this happens again and should hopefully be placed in a special bin for better treatment.
Scale means you will be fire-tested, and are more likely to receive sane treatment instead of the ai-support special.
There could be a warning when someone gets caught making multiple failed upload attempts.
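Roughly like this, as a sketch (the threshold and the hook are made up, not an existing Lemmy or pict-rs feature): count rejected uploads per account and alert the admins once someone keeps probing the filter.

```python
from collections import defaultdict

# Made-up threshold: how many rejected uploads before admins get pinged.
FAILED_ATTEMPT_THRESHOLD = 3

failed_uploads: dict[str, int] = defaultdict(int)

def record_rejected_upload(user: str, notify_admins) -> None:
    """Call this whenever the upload filter rejects an image from `user`."""
    failed_uploads[user] += 1
    if failed_uploads[user] >= FAILED_ATTEMPT_THRESHOLD:
        # Repeatedly bouncing off the filter is itself a strong signal,
        # so flag the account for manual review before anything slips through.
        notify_admins(
            f"{user} has had {failed_uploads[user]} uploads rejected by the filter"
        )

# Example with a dummy notifier that just prints the alert:
for _ in range(3):
    record_rejected_upload("new_account_123", print)
```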