The current implementation of the WebSocket transport doesn't provide enough options to protect against DoS/DDoS by users sending large payloads over multiple WebSocket connections, which will then be processed in parallel.
The payload size limit (imposed by gorilla/websocket?) seems to be ~100 MB. With fewer than 100 connections spamming such large single-frame payloads in parallel, one may start noticing performance degradation: with EnableCompression: true it's just a few kB sent over the network, and you can compose valid payloads that won't cause a disconnect, e.g. a ~100 MB long, easily compressible operationName. With 100-200+ connections I made one of my servers unresponsive.
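For illustration, here is a minimal sketch of such a payload (the message shape follows the graphql-transport-ws protocol; the subscribe framing and repeated-byte filler are assumptions for the example, not taken from an actual exploit):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// A syntactically valid graphql-transport-ws "subscribe" message whose
	// operationName is ~100 MB of a single repeated byte; with
	// permessage-deflate negotiated this compresses to a few kB on the wire,
	// while the server still has to buffer the full ~100 MB after inflating.
	msg := fmt.Sprintf(
		`{"id":"1","type":"subscribe","payload":{"operationName":%q,"query":"{ __typename }"}}`,
		strings.Repeat("A", 100<<20),
	)
	fmt.Println(len(msg)) // roughly 100 MB once decompressed by the server
}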
Now, with InitFunc in combination with OnCloseFunc, we could already count concurrent connections per IP (or some other public client identifier) and reject further upgrades in CheckOrigin if too many are open, to partially mitigate this.
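As a rough sketch of that mitigation (ConnLimiter and clientIP are illustrative names, not part of gqlgen or gorilla/websocket, and the Release call would have to be wired into the close hook, whose exact shape depends on the gqlgen version):

package main

import (
	"net"
	"net/http"
	"sync"

	"github.com/gorilla/websocket"
)

// ConnLimiter caps concurrent WebSocket connections per client IP.
type ConnLimiter struct {
	mu   sync.Mutex
	max  int
	open map[string]int
}

func NewConnLimiter(maxPerIP int) *ConnLimiter {
	return &ConnLimiter{max: maxPerIP, open: make(map[string]int)}
}

// Acquire records a new connection for ip, or reports that the cap is hit.
func (l *ConnLimiter) Acquire(ip string) bool {
	l.mu.Lock()
	defer l.mu.Unlock()
	if l.open[ip] >= l.max {
		return false
	}
	l.open[ip]++
	return true
}

// Release must be called exactly once when a previously acquired connection
// closes, e.g. from the transport's close callback.
func (l *ConnLimiter) Release(ip string) {
	l.mu.Lock()
	defer l.mu.Unlock()
	if l.open[ip] <= 1 {
		delete(l.open, ip)
	} else {
		l.open[ip]--
	}
}

// clientIP is a naive extractor; behind a proxy you would read a trusted
// forwarding header instead of RemoteAddr.
func clientIP(r *http.Request) string {
	if host, _, err := net.SplitHostPort(r.RemoteAddr); err == nil {
		return host
	}
	return r.RemoteAddr
}

func main() {
	limiter := NewConnLimiter(10)
	upgrader := websocket.Upgrader{
		CheckOrigin: func(r *http.Request) bool {
			// Real origin validation elided; reject the upgrade when the
			// client already holds too many connections.
			return limiter.Acquire(clientIP(r))
		},
	}
	_ = upgrader // would be set on transport.Websocket{Upgrader: upgrader, ...}
}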
However, unless I'm missing something, the InitFunc doesn't let us access the connection to set a safe maximum read limit, like:
InitFunc: func(ctx context.Context, initPayload transport.InitPayload, conn *wsConnection) (context.Context, *transport.InitPayload, error) {
	// https://pkg.go.dev/github.com/gorilla/websocket#Conn.SetReadLimit
	conn.SetReadLimit(2500) // but conn isn't passed today, nor is it available via ctx
	return ctx, &initPayload, nil
},
This would make such DoS/DDoS attacks unfeasible: Gorilla would safely abort reading large payloads once a single frame, or multiple frames cumulatively, exceeded the limit and close the connection, which would force the user to open a new one, where they'd be faced with our custom connection upgrade rate limiter.
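For reference, a minimal sketch of how gorilla/websocket's read limit behaves on the receiving side (the read loop is illustrative, not gqlgen's actual one):

package wsexample

import (
	"log"

	"github.com/gorilla/websocket"
)

// readLoop reads messages until the peer disconnects or a limit is hit.
func readLoop(conn *websocket.Conn) error {
	conn.SetReadLimit(1 << 20) // cap incoming messages at 1 MiB
	for {
		_, msg, err := conn.ReadMessage()
		if err != nil {
			// An oversized message surfaces as websocket.ErrReadLimit after
			// gorilla has sent close code 1009 (message too big) to the peer.
			return err
		}
		log.Printf("received %d bytes", len(msg))
	}
}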
If something speaks against passing the whole connection, I think it would be good to set a configurable read limit in the websocket init func, with a default far below 100 MB (1 MB, perhaps), given that GraphQL payloads sent by clients via the WebSocket transport should always be < 1 MB.
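To make the proposal concrete, the configurable variant might look like this (the ReadLimit field is hypothetical, not an existing gqlgen option):

srv.AddTransport(transport.Websocket{
	Upgrader: upgrader,
	// ReadLimit is hypothetical: the transport would call
	// conn.SetReadLimit(ReadLimit) right after the upgrade and
	// default to something like 1 MB when left at zero.
	ReadLimit: 1 << 20,
})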
Obviously one could create a custom Transport, but this seems like a basic thing that could be added with one line of code, without any downside.
Any thoughts?