I was trying to reproduce the result of `cliqs/run_mindep.py`, line 49 at 1c72b06:

> `# it looks like that's actually slower than parallelizing over corpora, for some`
I found that pooling resulted in a roughly 2x speed-up of the run.
Without parallel:

    python run_mindep.py run en fr  866.40s user 0.48s system 99% cpu 14:28.04 total
    python run_mindep.py run en fr  893.17s user 0.53s system 99% cpu 14:55.14 total
    python run_mindep.py run en fr  905.34s user 0.56s system 99% cpu 15:08.00 total
With parallel (pmap):

    python run_mindep.py run en fr  404.78s user 13.91s system 48% cpu 14:23.18 total
    python run_mindep.py run en fr  410.19s user 14.25s system 47% cpu 15:01.91 total
    python run_mindep.py run en fr  418.29s user 14.64s system 54% cpu 13:09.16 total
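For reference, a minimal sketch of the pooled setup. I'm assuming `pmap` here is a thin wrapper around `multiprocessing.Pool.map` that farms one corpus out per worker; the helper names below are hypothetical, not the actual cliqs code:

```python
from multiprocessing import Pool

def pmap(fn, items, processes=4):
    """Parallel map: apply `fn` to each item using a process pool.

    Sketch only: `fn` must be picklable (defined at module top level)
    so Pool.map can ship it to the worker processes.
    """
    with Pool(processes=processes) as pool:
        return pool.map(fn, items)

def corpus_size(path):
    # Hypothetical per-corpus job: count tokens in one corpus file.
    with open(path) as f:
        return sum(len(line.split()) for line in f)

# e.g. sizes = pmap(corpus_size, ["en.conllu", "fr.conllu"])
```

One design note: a process pool duplicates the parent's data via pickling rather than sharing it, so the per-corpus task should be self-contained and return a small result.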
This was run on an "Intel(R) Core(TM) i5-4200U CPU @ 1.60GHz" (quad-core).
I think the run could be roughly an order of magnitude faster by inserting several numba `@jit`s into deptransform/depgraph. So far I have tested `@jit`-ing `gen_row` but didn't observe any speed-up.
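To illustrate the kind of function numba rewards, here is a toy sketch (not cliqs code; `total_dependency_length` and the head-array encoding are hypothetical, and numba is treated as an optional dependency):

```python
import numpy as np

try:
    from numba import njit  # optional: compile the hot loop if numba is available
except ImportError:
    def njit(fn):  # fallback: plain Python if numba is not installed
        return fn

@njit
def total_dependency_length(heads):
    """Sum |head - dependent| over a head-index array.

    Toy stand-in for a dependency-length computation: heads[i] is the
    index of token i's head, with the root marked as -1.
    """
    total = 0
    for dep in range(len(heads)):
        head = heads[dep]
        if head >= 0:  # skip the root
            total += abs(head - dep)
    return total
```

numba pays off mainly on tight numeric loops over arrays like this; a function that mostly manipulates Python objects and dicts typically shows no gain under `@jit`, which would be consistent with the `gen_row` observation above.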
It's definitely very possible to make this faster with numba. It's currently set up to work with PyPy, and that's what I've been using when speed becomes a bottleneck. It's possible that numba would provide a better speed/simplicity trade-off, but I haven't felt hindered by speed as it is.