In fast_carpenter, when trying to multiply/divide variables together I have seen high memory usage and jobs taking a very long time. I was histogramming two variables in a 2.4 GB file in multiprocessing mode with 8 CPU cores and it took 30 seconds. However, when I had the variables in an expression (see yaml config below) where they were divided and multiplied, fast_carpenter ran for 4 hours and reached a memory usage of 30 GB before I decided to kill the program, as nothing seemed to be happening. I ran both tests on the same machine, with the same file, the same number of CPU cores, and in multiprocessing mode. I also tried using coffea:local on the file, but nothing seemed to happen after about 10 minutes.
yaml config file:
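(The actual config was not captured above; the following is a minimal sketch of the kind of expression stage being described, assuming fast_carpenter's standard `Define` and `BinnedDataframe` stages. Stage names, variable names, and binning are placeholders, not the real config.)

```yaml
# Hypothetical illustration of defining a derived variable from an
# expression and histogramming it, in fast_carpenter's yaml format.
stages:
  - my_vars: fast_carpenter.Define
  - my_hist: fast_carpenter.BinnedDataframe

my_vars:
  variables:
    # Derived variable built by multiplying/dividing existing branches
    # (placeholder branch names).
    - ratio: var_a * var_b / var_c

my_hist:
  binning:
    # Histogram the derived variable (placeholder binning).
    - {in: ratio, bins: {nbins: 100, low: 0, high: 10}}
```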
Versions:
Python 3.8.5
Full `pip freeze` output: