Management UI is not responsive when server is under (heavy) load #13120
RabbitMQ version used: 4.0.5
Erlang version used: 27.2.x
Operating system (distribution) used: Ubuntu 24.04
How is RabbitMQ deployed? Generic binary package
What problem are you trying to solve?

We have a fairly large modular monolithic multi-tenant application which uses ~1200 queues. Each tenant has its own instance, deployed on two application servers. Each tenant has its own virtual host, and each instance opens two connections (one for publishing and one for consumers). Each consumer has its own channel, which means the app opens ~1200 channels on the consumer connection. RabbitMQ runs as a 3-node cluster. For now we have 7 tenants installed, so RabbitMQ manages 14 connections and ~20k channels. When one tenant is under load (the app processing around 3k messages per second), the management UI becomes unresponsive.
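As a rough sanity check of the numbers above (a sketch, assuming each of the 7 tenants runs 2 instances and each instance opens ~1200 consumer channels; the publishing connections' channels are not counted):

```python
# Back-of-the-envelope estimate using the figures from the question above.
tenants = 7
instances_per_tenant = 2       # each tenant is deployed on two app servers
channels_per_instance = 1200   # one channel per consumer

total_channels = tenants * instances_per_tenant * channels_per_instance
print(total_channels)  # -> 16800, in the same ballpark as the ~20k reported
```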
Replies: 3 comments 1 reply
-
@dario-l your cluster is deprived of CPU resources, and/or there are apps that invoke GET /api/queues, GET /api/channels and such without pagination. Every channel, even if it is not active otherwise, emits stats every 5 seconds. Which is why the limit on the maximum number of channels per connection has been going down gradually every couple of years. Shared clusters will inevitably run into the noisy neighbor problem (someone else's apps hogging most of CPU or other resources, including network bandwidth). All the tuning imaginable can only delay the inevitable. Which is why VMware products provision separate clusters by default and only leave the shared cluster option as an opt-in feature recommended only for integration testing and such. It's easy to see that RabbitMQ-as-a-Service providers severely limit the number of connections.
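For reference, the RabbitMQ management HTTP API's list endpoints such as GET /api/queues accept `page` and `page_size` query parameters, so monitoring tools can page through results instead of fetching every queue at once. A minimal sketch of building such a request URL (the `base_url` value is a placeholder for your own management endpoint):

```python
from urllib.parse import urlencode

def paged_queues_url(base_url: str, page: int, page_size: int = 100) -> str:
    """Build a paginated GET /api/queues URL rather than listing all queues in one call."""
    query = urlencode({"page": page, "page_size": page_size})
    return f"{base_url}/api/queues?{query}"

print(paged_queues_url("http://localhost:15672", 1))
# -> http://localhost:15672/api/queues?page=1&page_size=100
```

Fetching one page at a time keeps each management-API request cheap, which matters on a cluster that is already CPU-starved.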
-
Also, the Virtual hosts guide explicitly states that virtual hosts only offer logical separation and nothing in terms of physical resource separation. So the pool of resources (cores, disk space, disk I/O, network bandwidth) in a cluster must be adequate for all tenants, or you have no choice but to use separate clusters.
-
Thanks for the clarification. As for resources, we are aware of all that.
That sentence, I think, is the most important one for us. We definitely must refactor to decrease the number of channels. Our plan is to use only one channel per module. We currently have about 60 modules, so this will reduce the channel count drastically.