Suggestion
I hope this is the right place to bring this up: awful things are being sent in many channels on matrix.org, both by users registered directly on the server and by federated users. It happens multiple times a week, and catching up with it through moderation after the fact doesn't really do the situation justice.
Therefore, I suggest that new users should not be allowed to send images or links with previews in public channels without some sort of moderator approval and/or meeting some other automatic trust criteria. The same could apply to sending more than 2-3 mentions per minute, or something along those lines.
Some automatic criteria other chat services and forums use to establish trust are:

1. the account has existed and shown some activity for a couple of days,
2. the user has been participating with text messages for a while without getting reported,
3. the homeserver itself is known and has had users participating for a while without getting blocklisted,
4. the homeserver hasn't had a high ratio of banned or reported users in the past,
5. (for local users) the user's IP address isn't on a known VPN, Tor exit node, or other bot blocklist,
6. the user hasn't sent a suspiciously high number of messages so far.
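Purely as an illustration, here is a minimal Python sketch of how such signals could be combined into a single trust score. The `TrustSignals` fields, weights, and baseline are all made up for the example and don't correspond to any existing Synapse or Matrix API:

```python
# Hypothetical sketch only -- signal names, weights and thresholds are illustrative.
from dataclasses import dataclass


@dataclass
class TrustSignals:
    account_age_days: float          # 1. how long the account has existed / been active
    unreported_text_messages: int    # 2. text participation without reports
    homeserver_age_days: float       # 3. how long the homeserver has been federating
    homeserver_ban_ratio: float      # 4. banned/reported users vs. total users
    ip_on_blocklist: bool            # 5. known VPN / Tor exit / bot blocklist (local users)
    messages_last_hour: int          # 6. suspiciously high message volume


def trust_score(s: TrustSignals) -> float:
    """Return a score in [0, 1]; higher means more trusted (illustrative weights)."""
    score = 0.0
    score += 0.25 * min(s.account_age_days / 3.0, 1.0)
    score += 0.25 * min(s.unreported_text_messages / 50.0, 1.0)
    score += 0.20 * min(s.homeserver_age_days / 30.0, 1.0)
    score += 0.15 * max(0.0, 1.0 - s.homeserver_ban_ratio * 10.0)
    if s.ip_on_blocklist:
        score -= 0.30
    if s.messages_last_hour > 100:
        score -= 0.20
    # Small baseline so brand-new accounts can still chat with text.
    return max(0.0, min(1.0, score + 0.15))
```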
Some actions to take could be:

1. if the trust level is too low, let the user know and refuse to send the image, or
2. put the image into a moderation queue and block further images until it has been reviewed by somebody, or
3. hide the image on the receiver's end with a clear warning that it isn't trusted (probably not enough on its own, given the kind of images sometimes being sent, but it could be an additional mechanism once trust is a little higher yet not quite established).
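Again purely illustrative, a sketch of how a server might map such a score onto the actions above; the `MediaDecision` names and thresholds are hypothetical, not anything that exists today:

```python
# Hypothetical mapping from trust score to the actions listed above.
from enum import Enum


class MediaDecision(Enum):
    REJECT = "reject"                # 1. refuse the image and tell the user why
    QUEUE_FOR_REVIEW = "queue"       # 2. hold in a moderation queue, block further images
    SHOW_WITH_WARNING = "warn"       # 3. deliver, but hidden behind a warning for receivers
    ALLOW = "allow"


def decide_media_action(score: float) -> MediaDecision:
    if score < 0.3:
        return MediaDecision.REJECT
    if score < 0.6:
        return MediaDecision.QUEUE_FOR_REVIEW
    if score < 0.8:
        return MediaDecision.SHOW_WITH_WARNING
    return MediaDecision.ALLOW
```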
My apologies if such mechanisms already exist; if they do, they probably need some tuning, because the situation right now seems pretty bad.
(Moved over from matrix-org/matrix-spec-proposals#4212 where I filed it incorrectly by accident, sorry!)