Document load under fleet #2003
And some API timings of the server under load (IIRC 75k cores), but with various stages of 'distress':
5 request_task / s high:
9 request_task / s too much:
7 request_task / s: if we go with the above timing for request_task, we can handle 1000 / 149 ~ 7 of them per second, so that's consistent with the numbers we have here (without timeouts at various rates for profiles 2 and 3).
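Spelled out, a back-of-the-envelope restatement of that arithmetic, under the assumption that the calls are effectively handled one at a time:

per_call_ms = 149                      # measured server time per request_task call
serial_capacity = 1000 / per_call_ms   # calls per second if they are handled serially
print(f"~{serial_capacity:.1f} request_task calls/s")   # ~6.7, i.e. about 7 per second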
No, I don't know why that happens. Admittedly, I don't quite know how the frameworks around our Python code do things; if they only allowed one concurrent access to the same endpoint, it would be obvious.
fishtest/server/fishtest/rundb.py Lines 681 to 683 in b204b37
fishtest/server/fishtest/rundb.py Lines 723 to 732 in b204b37
It should queue only 4 requests and drop the others until a slot in the semaphore is released...
The entire thing is basically single threaded from the start because of how Python does things. I think Pasquale has worked around that by configuring nginx so that certain endpoints are forwarded to different Python processes (I think we have 2 running)? So all requests to the same endpoint are handled sequentially without any concurrency, IIRC.
fishtest/server/production.ini Lines 20 to 36 in b204b37
Each of the 3 instances should have 4 threads, so the semaphore should never be exhausted.
PROD is running with 10 threads per instance. I will check the log for rejected requests.
If Disservin is correct then the # of threads is irrelevant because of the GIL. EDIT: that’s not quite right. Threads can switch between the execution of bytecodes.
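For context, a tiny illustration (not Fishtest code) of the switch interval CPython uses to preempt a running thread between bytecode instructions:

import sys

# CPython lets another thread run after roughly this many seconds of bytecode
# execution, so threads do interleave; they just never execute Python bytecode
# in parallel because of the GIL.
print(sys.getswitchinterval())   # 0.005 by default
sys.setswitchinterval(0.005)     # tunable, though rarely worth changing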
Let me add some workers to check this; I need to pause the progression tests for that.
import threading
import time


class Test:
    def __init__(self):
        self.task_lock = threading.Lock()
        self.task_semaphore = threading.Semaphore(4)

    def access_resource(self, thread_id):
        if self.task_semaphore.acquire(False):
            try:
                with self.task_lock:
                    print(f"Thread {thread_id} is accessing the resource.")
                    time.sleep(0.001)
            finally:
                self.task_semaphore.release()
                print(f"Thread {thread_id} has released the resource.")
        else:
            print(f"Thread {thread_id} could not acquire the semaphore, resource is too busy.")


test = Test()
threads = []
for i in range(1000):
    thread = threading.Thread(target=test.access_resource, args=(i,))
    threads.append(thread)
    thread.start()

# Wait for all threads to complete
for thread in threads:
    thread.join()
May 13 21:23:56 tests.stockfishchess.org pserve[26408]: request_task too busy
May 13 21:23:56 tests.stockfishchess.org pserve[26408]: request_task too busy
May 13 21:23:56 tests.stockfishchess.org pserve[26408]: request_task too busy
May 13 21:23:56 tests.stockfishchess.org pserve[26408]: request_task too busy
May 13 21:23:56 tests.stockfishchess.org pserve[26408]: request_task too busy
May 13 21:23:56 tests.stockfishchess.org pserve[26408]: request_task too busy
May 13 21:23:56 tests.stockfishchess.org pserve[26408]: request_task too busy
May 13 21:23:56 tests.stockfishchess.org pserve[26408]: request_task too busy
May 13 21:23:56 tests.stockfishchess.org pserve[26408]: request_task too busy
May 13 21:23:56 tests.stockfishchess.org pserve[26408]: request_task too busy
May 13 21:23:56 tests.stockfishchess.org pserve[26408]: request_task too busy
Just FYI, while the workers are ramping up, the initial phase has the following server timings:
2-3min later:
about 9min later:
22min later:
34min later:
It is clear that request_task timing grows quickly with the server load:
or
So in the ramp-up phase, with few workers active, we see about 14ms per call; later this is closer to 190ms per call.
Well, I only mentioned the bits I've collected; also it's the first time I'm seeing the prod ini with the threads, so take everything from me with a grain of salt.
It's the first proof that waitress threads are for real :)
As we've done the check, I'll take the workers down. As shown in #2003 (comment), it looks clear that the request_task API is getting increasingly expensive under load, from roughly 15 to 190ms per call between the first call of a worker and the second call.
I think the number of connections in waitress should be equal to the number of threads (so that waitress does not queue connections) and then the value of the request_task semaphore should be adjusted so that Fishtest does not become overloaded.
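For illustration, both knobs can also be set programmatically; a minimal sketch of the proposal with a stand-in WSGI app (the real server is configured via production.ini, so this is not the actual startup code):

from waitress import serve

def app(environ, start_response):
    # trivial WSGI app standing in for the Fishtest Pyramid application
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok\n"]

THREADS = 4
serve(
    app,
    host="127.0.0.1",
    port=6543,
    threads=THREADS,
    connection_limit=THREADS,  # default is 100; the proposal is to keep it close to the thread count
)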
For years we ran the server with connection_limit=500 and 16 threads; I didn't measure any improvement with 2000 machines / 17000 cores in PROD, so I restored the waitress defaults of connection_limit=100 and 4 threads in b960e2b. In nginx we keep alive 64 connections for the upstream to avoid the cost of closing and opening connections (not sure the 2 configs are talking about the same "connection"):
upstream backend_6543 {
    server 127.0.0.1:6543;
    keepalive 64;
}
The thing is that 100 is too big. Waitress will simply queue connections without giving Fishtest a chance to reject them. That's why we are seeing 30s for API calls; they are just queued in waitress. EDIT: I mean too big compared to the number of worker threads.
We set 30 seconds in nginx:
location / {
    rewrite ^/$ /tests permanent;
    rewrite ^/tests/$ /tests permanent;
    proxy_set_header Connection "";
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $host:$server_port;
    proxy_set_header X-Forwarded-Port $server_port;
    proxy_set_header X-Country-Code $region;
    client_max_body_size 120m;
    client_body_buffer_size 128k;
    proxy_connect_timeout 5s;
    proxy_send_timeout 30s;
    proxy_read_timeout 30s; # timeout
    proxy_max_temp_file_size 0;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_cache cache_zone;
    gunzip on;
    proxy_pass http://$backends;

    location /api/serialize_run/ {
        deny all;
    }
}
Yes, nginx caps the connection time. But either waitress or Fishtest should reject the connections; it should not be left to nginx.
It's really the job of nginx. We can set connection limit and rate for host, uri, binary_remote_addr, globally, or even for any single path:
limit_conn_zone $host zone=perhost:10m;
limit_conn perhost 10;
limit_conn_status 429;
Well then nginx should limit the number of connections to waitress to the number of threads in Fishtest. Otherwise waitress will just queue the connections. The point is we need to give Fishtest a chance to reject the request_task connections using the semaphore.
Note that Fishtest will reject the connection immediately. There will be no 30s timeout.
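A minimal sketch of what rejecting immediately means here, based on the non-blocking semaphore guard linked above and the "request_task too busy" log lines (the helper name and return values are illustrative, not the actual rundb.py code):

import threading

task_semaphore = threading.Semaphore(4)

def assign_task(worker_info):
    # stand-in for the real work of matching a worker to a run/task
    return {"task_assigned": True, "worker": worker_info}

def request_task(worker_info):
    # Non-blocking acquire: if all 4 slots are busy, answer right away instead
    # of letting the request queue up and eventually hit the 30s proxy timeout.
    if not task_semaphore.acquire(blocking=False):
        return {"error": "request_task too busy"}   # mirrors the log lines above
    try:
        return assign_task(worker_info)
    finally:
        task_semaphore.release()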
Set 10 connections in nginx (like the waitress threads), tested opening 50 connections concurrently:
2024/05/14 00:52:57 [error] 13428#13428: *790 limiting connections by zone "per_host_conn", client: ***.***.***.***, server: tests.stockfishchess.org, request: "GET /api/active_runs HTTP/2.0", host: "tests.stockfishchess.org"
2024/05/14 00:52:57 [alert] 13429#13429: ignore long locked inactive cache entry 522ba7397b573b51acc828305a6d6a06, count:7
2024/05/14 00:53:59 [error] 13428#13428: *1075 limiting connections by zone "per_host_conn", client: ***.***.***.***, server: tests.stockfishchess.org, request: "GET /api/active_runs HTTP/2.0", host: "tests.stockfishchess.org"
2024/05/14 00:53:59 [error] 13428#13428: *1073 challenging request, client: ***.***.***.***, server: tests.stockfishchess.org, request: "GET /api/active_runs HTTP/2.0", host: "tests.stockfishchess.org" |
Set 30 connections in nginx and 10 connections in waitress (like the 10 threads).
That is of course true. But one can overdo it, which leads to buffer bloat https://en.wikipedia.org/wiki/Bufferbloat . Currently we have an nginx queue, a waitress queue and a queue in Fishtest (of threads waiting for locks to clear), and it becomes hard to understand how they interact. I think the waitress queue should be eliminated, since nginx is already queuing, so that Fishtest can take control.
Of course I may be wrong, but it seems natural to me that if Fishtest can handle say 7 request_task calls per second, it should be given the opportunity to reject more. Now it does not have that opportunity, since it seems to me that most connections are queued by waitress until they are closed by nginx after 30 seconds.
I really think we should understand and fix why Fishtest can only handle that few request_task calls. If we can go by the profile in #2003 (comment), we see that it spends most time in:
So, somehow 3 of the 4 hotspots are checks; can they be written or implemented differently / are they even needed at this point in the code?
If we can't fix request_task, we should probably rate limit it at the nginx level, while at the same time modifying the worker to catch that condition. Not really as an error (i.e. no other messages sent to the server), but just for the worker to go to sleep and retry a while later; i.e. not really an error condition, rather a signal.
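On the worker side that could look roughly like the following sketch (hypothetical helper, assuming the requests library; the URL and error string are taken from the thread above, everything else is illustrative):

import random
import time

import requests

def request_task_with_backoff(base_url, payload, max_sleep=120):
    # Keep asking for a task; when the server signals rate limiting (HTTP 429
    # from nginx) or "too busy" (the semaphore rejection), sleep with jitter
    # and retry instead of treating it as an error.
    sleep = 5
    while True:
        response = requests.post(f"{base_url}/api/request_task", json=payload, timeout=30)
        if response.status_code == 429:
            time.sleep(sleep + random.uniform(0, sleep))
            sleep = min(sleep * 2, max_sleep)
            continue
        data = response.json()
        if data.get("error") == "request_task too busy":
            time.sleep(sleep + random.uniform(0, sleep))
            sleep = min(sleep * 2, max_sleep)
            continue
        return data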
Certainly, it limits concurrent access to that API, but it does not seem to unclog the waitress queue, at least not sufficiently, as we still see the 30s API timeout with this config. While I don't know the exact cause of the API timeouts, the semaphore still allows 100% of server capacity to be consumed by request_task: it doesn't limit the number of calls per second, it just limits concurrent access to the API. Queues can still grow with other API calls (such as heartbeat or update_task), and these can still time out?
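If the goal were to cap request_task calls per second rather than concurrency, a token-bucket style limiter would be one way to express it; a minimal sketch (not Fishtest code, numbers taken from the earlier estimate):

import threading
import time

class RateLimiter:
    """Allow at most `rate` calls per second, with a small burst allowance."""

    def __init__(self, rate, burst):
        self.rate = rate
        self.capacity = burst
        self.tokens = burst
        self.updated = time.monotonic()
        self.lock = threading.Lock()

    def allow(self):
        with self.lock:
            now = time.monotonic()
            # refill according to elapsed time, capped at the burst size
            self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
            self.updated = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False

request_task_limiter = RateLimiter(rate=7, burst=14)   # ~7 calls/s as estimated earlier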
But there weren’t any 30s timeouts in the latest log you posted #2003 (comment) (158 calls/s).
Compare this to the log of a similar load (153 calls/s) you posted in the beginning #2003 (comment).
Any of these logs can happen with the current config; the calls/s doesn't mean so much, it depends on what is being called (cheap calls like get_version / heartbeat vs expensive calls like request_task), and the server max times can change quite drastically within seconds (e.g. after a large test finishes).
This was the waitress configuration at the time of the semaphore commit, set with these commits c1c086d a932f79: fishtest/fishtest/production.ini Lines 31 to 33 in bced5cc
With the probability distribution of the API calls and the timing of the APIs, we should be able to configure the proper queue lengths (for example, why a semaphore of 4?). (The comment suggests that at the time Fishtest was able to withstand at most 500 workers :) )
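One rough way to reason about those queue lengths is Little's law applied to the timings measured above (a sketch of the reasoning, not a statement about how the original values were chosen):

# Little's law: average number of requests in service = arrival rate * service time.
arrival_rate = 7        # request_task calls per second (estimate from above)
service_time = 0.149    # seconds per call (from the profile above)
in_flight = arrival_rate * service_time
print(f"~{in_flight:.1f} request_task calls in flight on average")   # ~1.0
# A semaphore of 4 therefore leaves roughly 4x headroom over the average
# concurrency before calls start being rejected.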
Could we reduce the waitress time limit now from 30s to something lower? 5s? (Maybe 10-15s first.) If there are hundreds (or more) of workers asking for request_task within a second or two, presumably a queue of ~20 forms in each thread. Does the timeout apply to the ones in the queues, and does waitress reject further requests? I imagine the Fishtest "request_task too busy" is pretty quick, but all 5 threads will (try to) execute request_task sometimes, so the arriving requests could fill the waitress connections pretty fast. EDIT: I should have checked first - the semaphore means only 4 threads can be running request_task fully, so with 5 threads one will be rejecting requests if they all get them. Hopefully this can do 100s / second.
I guess we could increase that semaphore (and thread count to match) to try to work the server harder. It might be most efficient at less than full CPU load, but that could be a bit higher than 4. Try
Ah, I missed this bit. Maybe I would increase threads slightly (to 8?) to allow other things to happen when lots of request_tasks are arriving. Imagine if an upload happens at the same time. |
At most one thread can enter … The bad thing happens if all threads are queuing to enter …
Tested working fine with a fleet of 75k cores / 10k machines, with Waitress configured with threads=4 and connection_limit=100. HW usage: RAM 72%, primary Waitress instance CPU 85-95%. This is a good compromise between threads "privileged" for the "request_task" API and threads free for any other API, while keeping Waitress threads=4. Raising the number of Waitress threads uses more RAM and can require lowering the MongoDB cache size to avoid the OOM killer. See #2003. Drop old mongodb requirement.
Coming back here with some more documentation and the latest server-side timings. Since the initial issue we have made a lot of progress, and also upgraded the server. We had a successful run with 190k+ cores working on tests. Some knowledge gained:
Some typical timings:
Note the 30s maximum (which leads to failing workers), but the fairly low average time. A sampled profile under load looks like:
I am reading this as the thread handling scheduled tasks being mostly idle. |
That's possible (but this was with a local patch applied
(Also, at one point we had the flush_buffers period at 10s instead of 1s, which did improve the load, but that was prior to the above-mentioned change.)
Validating worker APIs on the main instance should be cheap (it only invokes some cache lookups). On secondary instances it is more complicated. This currently only applies to
The secondary instances don't matter too much; the load on the primary instance is what limits.
Ah. I would have liked to know how much difference this makes. In previous tests … The issue with your patch is that it only applies to a single task. If hypothetically we were to split flush_buffers some way into two tasks, flush_buffers1 and flush_buffers2, then they could still load the task thread 100% even with your patch. I wonder if we should introduce an idle task which would just sleep 0.2s. In that way the task thread would never be 100% loaded. EDIT: No, I think that would be a bad idea.
So the honest answer is that this is very difficult to measure, as the load on the server fluctuates with the type of tests on Fishtest, etc. However, visually at that point the load seemed to drop from 100% to 50%. We did have two instances where the server seemed to hang; the only thing the sampling returned was this:
Not solid evidence, but well, it's not so easy to figure out. With that fix in place we didn't observe any hangs. I think the main issue is that with the locks in place, we have no guarantee of what the timings can be. If the locks cause the timings to become longer than the period, we have an issue.
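A minimal sketch of why a task duration longer than its period keeps the scheduling thread saturated (generic code, not Fishtest's actual scheduler):

import time

def run_periodic(task, period):
    # Schedule the next run relative to the start of the previous one. If
    # task() regularly takes longer than `period` (e.g. because it is waiting
    # on locks), next_run is always in the past: the loop never sleeps and the
    # thread stays at 100% until the congestion clears.
    next_run = time.monotonic()
    while True:
        task()
        next_run += period
        delay = next_run - time.monotonic()
        if delay > 0:
            time.sleep(delay)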
We did measure at a given point (with the patch in place, and no issues on the server) the distribution of the time taken for _do_work:
So, in most cases it takes less than 0.1s, but there were outliers up to 2s.
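A distribution like that can be collected with very little instrumentation; a sketch (hypothetical wrapper around a _do_work-like callable):

import collections
import time

durations = collections.Counter()

def timed_call(do_work):
    # Wrap the periodic task and bucket its duration in 0.1s steps, which is
    # enough to see "mostly < 0.1s, occasional 2s outliers".
    start = time.perf_counter()
    try:
        return do_work()
    finally:
        durations[round(time.perf_counter() - start, 1)] += 1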
A couple of times we got 100% CPU load on the |
I see. I still think that long tails are harmless, but I guess the average duration of … One obvious effect that might amplify this is that the …
Oh, that could make quite a difference? |
I think so because the |
Just to keep some numbers for later reference, this is the average time in ms needed server-side when the fleet is present on, but not overloading, the server (50k cores), as obtained from the worker output (
Post request https://tests.stockfishchess.org:443/api/request_task handled in 1719.69ms (server: 265.24ms)
):
The corresponding observation on the worker side:
Server-side sampling of the traces, for calls not leading to a lock, looks like:
Essentially all stack traces go via request_task.
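For completeness, this kind of in-process stack sampling can be done with the standard library alone; a sketch (the sampling actually used here may differ):

import sys
import threading
import traceback

def sample_stacks():
    # One snapshot of every thread's current Python stack; calling this
    # periodically gives the kind of sampled profile shown above.
    frames = sys._current_frames()
    for thread in threading.enumerate():
        frame = frames.get(thread.ident)
        if frame is not None:
            print(f"--- {thread.name} ---")
            print("".join(traceback.format_stack(frame)))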