This repository has been archived by the owner on Mar 16, 2023. It is now read-only.

Load balancing #2

Closed
ghost opened this issue Dec 10, 2015 · 4 comments
@ghost

ghost commented Dec 10, 2015

Seeing this is pretty similar to the sticky-session module, I'd like to ask the same question I asked there (#41): is load balancing possible (through the XFF header or other means)?

@uqee
Owner

uqee commented Dec 10, 2015

Hi, @Xaxatix!
The main goal of this module is to map multiple requests from one client to the same worker process, no matter how busy that worker is. That goal is fundamentally at odds with any kind of load balancing.
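To illustrate the idea (this is a hypothetical sketch, not this module's actual code): sticky routing picks the worker deterministically from the client's address, so the choice ignores load entirely. The hash function and names below are my own assumptions.

```javascript
// Hypothetical sketch: derive a worker index from the client's IP so
// repeated connections from the same address always hit the same worker.
function stickyWorkerIndex(remoteAddress, workerCount) {
  let hash = 0;
  for (const char of remoteAddress) {
    hash = (hash * 31 + char.charCodeAt(0)) >>> 0; // simple rolling hash
  }
  return hash % workerCount;
}

// Same address -> same index, regardless of how busy that worker is.
console.log(stickyWorkerIndex('203.0.113.7', 4));
console.log(stickyWorkerIndex('203.0.113.7', 4));
```

Because the index depends only on the address, a single heavy client can pin all of its traffic to one already-busy worker, which is exactly why this contradicts load balancing.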

@uqee uqee added the question label Dec 10, 2015
@ghost
Author

ghost commented Dec 10, 2015

Well, the idea of the cluster module is to make better use of a system's resources, but that only happens on a single machine. To expand to multiple machines, you have to add another layer of load balancing. The problem is that intra-cluster sticky sessions don't play nicely with reverse proxies/load balancers: the remoteAddress every worker sees is the proxy's address, so it never changes and all traffic gets routed to just one of the workers.
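The usual workaround for this is to recover the original client address from the X-Forwarded-For header instead of remoteAddress. A minimal sketch (the helper name and header shapes are my own assumptions; XFF is only trustworthy when the proxy chain is trusted):

```javascript
// Hypothetical helper: behind a reverse proxy, remoteAddress is the
// proxy's IP. X-Forwarded-For is "client, proxy1, proxy2, ..."; the
// left-most entry is the original client.
function clientAddress(headers, remoteAddress) {
  const xff = headers['x-forwarded-for'];
  if (!xff) return remoteAddress; // direct connection, no proxy
  return xff.split(',')[0].trim();
}

// Behind a proxy at 10.0.0.1, the real client is 198.51.100.9:
console.log(clientAddress({ 'x-forwarded-for': '198.51.100.9, 10.0.0.1' }, '10.0.0.1'));
// Without the header, every worker would key on the proxy's address:
console.log(clientAddress({}, '10.0.0.1'));
```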

@uqee
Owner

uqee commented Dec 10, 2015

Yep, I understood. Again, if your app works fine with load balancing, then you don't need this module or the sticky-session one; have a look at the detailed explanation of one of the use cases: why does socket.io need sticky sessions.

Although, if you still think you do, it means you want to implement sticky (not load-based) routing on your proxy and sticky sessions on every machine behind it (horizontal and vertical scaling at once). In that case, your master process has to start reading the socket to get the XFF header and determine the corresponding worker, but the worker then also expects to read the socket from the beginning. I'm sure this problem is solvable, but if I were you, I'd consider other options.

@ghost
Author

ghost commented Dec 11, 2015

Yes, but not using clustering on a multi-core machine would mean running your app multiple times on different ports, which is kinda cumbersome if you ask me. And if you run in cluster mode, sticky sessions are required. So in reality, to achieve true horizontal (multiple servers) and vertical (cluster per server) scaling, you'd need a sticky reverse proxy for the horizontal part, and then another sticky reverse proxy in the master cluster process for the vertical part. I don't know if it's worth the trouble, and most importantly the performance loss, to go this far just for socket.io.
