What happened?
I have deployed the project on Kubernetes, where `msggateway` is set up as a headless service and the remaining services use `ClusterIP`. I also deployed a `msggateway-proxy` to handle the routing. However, I've encountered an issue with the consistent-hashing mechanism that maps a user's unique ID to a `msggateway` pod: the mapping resolves incorrectly, so online push messages fail to reach the pod holding the user's connection and are routed to other pods instead.
What did you expect to happen?
To elaborate, whenever an online push is attempted, the message does not reach the intended user's live connection but is directed to a different pod. It appears that the hashing mechanism, which uses the user ID (`uid`) to look up the `msggateway` host, is retrieving the wrong host. I have reviewed the documentation but found nothing relevant to this issue. I'm also not very familiar with Helm, which might be part of the problem.
How can we reproduce it (as minimally and precisely as possible)?
The issue appears to reside in the consistent-hashing code within the proxy program. When hosts are added to the hash ring, they must exactly match the hosts as retrieved from Kubernetes, including both the hostname and port; only then will the same `uid` resolve to the same WebSocket (ws) connection.
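To illustrate why the host strings must match exactly, here is a minimal consistent-hash ring sketch in Go. This is not the project's actual implementation, and the pod addresses (`msggateway-0.msggateway:10001`, etc.) are hypothetical; it only shows that a ring built from `host:port` strings and a ring built from bare `host` strings place their nodes at different positions, so the same `uid` can resolve to different pods.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// ring is a minimal consistent-hash ring mapping keys to node names.
type ring struct {
	points []uint32          // sorted hash positions on the ring
	owner  map[uint32]string // position -> node name
}

func hashOf(s string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(s))
	return h.Sum32()
}

func newRing(nodes []string) *ring {
	r := &ring{owner: make(map[uint32]string)}
	for _, n := range nodes {
		p := hashOf(n)
		r.points = append(r.points, p)
		r.owner[p] = n
	}
	sort.Slice(r.points, func(i, j int) bool { return r.points[i] < r.points[j] })
	return r
}

// get returns the node owning key: the first ring position at or after
// the key's hash, wrapping around to the start of the ring.
func (r *ring) get(key string) string {
	h := hashOf(key)
	i := sort.Search(len(r.points), func(i int) bool { return r.points[i] >= h })
	if i == len(r.points) {
		i = 0
	}
	return r.owner[r.points[i]]
}

func main() {
	// Hypothetical pod addresses; the exact names are illustrative only.
	// "host:port" and bare "host" are different strings, so each ring
	// places its nodes at different positions.
	withPort := newRing([]string{
		"msggateway-0.msggateway:10001",
		"msggateway-1.msggateway:10001",
		"msggateway-2.msggateway:10001",
	})
	bare := newRing([]string{
		"msggateway-0.msggateway",
		"msggateway-1.msggateway",
		"msggateway-2.msggateway",
	})

	uid := "user-12345"
	// The same ring always returns the same node for the same uid...
	fmt.Println("with port:", withPort.get(uid))
	// ...but a ring built from differently formatted host strings may
	// pick a different pod for that uid, breaking the routing contract.
	fmt.Println("bare host:", bare.get(uid))
}
```

If the proxy registers hosts as `host:port` while the push service looks them up as bare `host` names (or vice versa), the two sides consult effectively different rings, which would produce exactly the misrouting described above.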
Anything else we need to know?
No response
version
```console
$ {name} version
# paste output here
```
Cloud provider
OS version
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
Install tools