Run images through an inappropriate content filter #11795

Open
makyen opened this issue Jun 29, 2024 · 3 comments
Labels
area: spamchecks (Detections or the process of testing posts. The lack of a space in the label is because of Hacktoberfest.)
status: confirmed (Confirmed as something that needs working on.)
type: feature request (Shinies.)

Comments

@makyen (Contributor) commented Jun 29, 2024

Given that we see inappropriate content (porn, CSAM, etc.) posted as images, it would be good to run those images through an inappropriate content filter so we can detect it. My expectation is that an external service would be used, but there may also be packages available that process the image locally. Substantial investigation would be needed into what's available and what we might get free access to. We should at least investigate whether there is a way for us to use the same service that SE is using.
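
To make the shape of this concrete, here is a minimal sketch of what the external-service variant could look like. The endpoint, payload, score field, and threshold are all hypothetical placeholders; nothing here reflects a specific provider's actual API.

```python
import requests

# Hypothetical endpoint and response shape; a real integration would follow
# whichever provider we end up choosing.
MODERATION_ENDPOINT = "https://moderation.example/v1/check"
SCORE_THRESHOLD = 0.9  # arbitrary cut-off for this sketch

def image_is_inappropriate(image_url: str) -> bool:
    """Ask the external filter to score an image found in a post."""
    response = requests.post(
        MODERATION_ENDPOINT,
        json={"url": image_url},
        timeout=10,
    )
    response.raise_for_status()
    # Assumed response body: {"inappropriate_score": <float in [0, 1]>}
    return response.json().get("inappropriate_score", 0.0) >= SCORE_THRESHOLD
```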

makyen added the area: spamchecks, status: confirmed, and type: feature request labels on Jun 29, 2024
@teward (Member) commented Jun 29, 2024

Note we have a legal issue here: simply having access to such content to generate hashes, etc., violates laws in the US and other jurisdictions. Any system processing this would need an MOU with the corresponding legal jurisdictions confirming that we are operating such filtration and hash summing in accordance with legally defined rules and limits, and thereby not violating federal CSAM or content laws. Additionally, any system running these checks has to be covered by a Terms of Use under which this activity is permitted.

We need to deeply assess the legal exposure around this, especially if we intend to implement #11794.

@makyen (Contributor, Author) commented Jun 29, 2024

Hmmm... Legal issues were not something I had been considering, but I agree they could be considerable.

Would we have a legal issue if the only things we touch are the URL that's in the post and the response from the third party's inappropriate-image filter (i.e., we pass the URL we find in the post, not the image, to the third party; the third-party service is then responsible for fetching the image from that URL)?
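
For illustration, a rough sketch of that URL-only flow, again against a hypothetical endpoint. The point is that we only ever handle the URL and the service's verdict, never the image bytes. The URL regex and response shape are assumptions for the sketch.

```python
import re
import requests

# Naive matcher for direct image links in a post body; a real implementation
# would reuse whatever URL extraction SmokeDetector already does elsewhere.
IMAGE_URL_RE = re.compile(r"https?://\S+\.(?:png|jpe?g|gif|webp)", re.IGNORECASE)

def flag_post_images(post_body: str) -> list[str]:
    """Return the image URLs in a post that the third-party filter flags."""
    flagged = []
    for url in IMAGE_URL_RE.findall(post_body):
        response = requests.post(
            "https://moderation.example/v1/check-url",  # hypothetical
            json={"url": url},  # the URL only; the service fetches the image
            timeout=10,
        )
        response.raise_for_status()
        # Assumed response body: {"flagged": <bool>}
        if response.json().get("flagged", False):
            flagged.append(url)
    return flagged
```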

@teward (Member) commented Jun 29, 2024

(broken GH is broken, they deleted my message)

I believe that would resolve the issue, yes. I think most of the content-filtering services have an MOU with law enforcement and work with them to report CSAM and the like.

This would rule out local processing, though, because we would then be relying on a third-party service.
