Dear developer:
SEVtras is a great algorithm! Thank you for developing it. I want to use it in my research, but when I run larger files with SEVtras.sEV_recognizer, the process is always interrupted without any warning and no files appear in out_path. My machine has 16 cores (32 threads) and 128 GB of memory; is that enough? I run the algorithm in JupyterLab on Linux, and the input files are cellranger output. I looked into the cause and found that the algorithm spends a long time running multi_enrich and then the process suddenly stops working. Is there any time limit or hardware requirement?
Thank you very much!
SEVtras was developed around the size of typical 10X datasets, and 128 GB is already enough for those. If you run SEVtras on files much larger than a normal 10X dataset, it will need a lot of memory to complete. I suggest checking memory usage at the moment the process is suddenly interrupted. If memory is the problem, you can subsample the data and run SEVtras again.
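In case it helps, here is a minimal sketch of how one might subsample droplets from a raw cellranger matrix before rerunning SEVtras. It assumes a standard raw_feature_bc_matrix directory with matrix.mtx.gz, barcodes.tsv.gz, and features.tsv.gz; the paths and the 50% fraction are illustrative, not anything SEVtras prescribes.

```python
# Hypothetical subsampling sketch: keep a random 50% of droplets from a raw
# cellranger matrix and write a new 10X-style directory to run SEVtras on.
import gzip
import os
import shutil
import numpy as np
import scipy.io as sio

raw_dir = "sample1/raw_feature_bc_matrix"              # illustrative input path
out_dir = "sample1_subsampled/raw_feature_bc_matrix"   # illustrative output path
os.makedirs(out_dir, exist_ok=True)

# 10X mtx layout is features (rows) x barcodes (columns).
mat = sio.mmread(f"{raw_dir}/matrix.mtx.gz").tocsc()
with gzip.open(f"{raw_dir}/barcodes.tsv.gz", "rt") as f:
    barcodes = np.array(f.read().splitlines())

# Randomly keep half of the droplets to roughly halve the memory footprint.
rng = np.random.default_rng(0)
keep = np.sort(rng.choice(mat.shape[1], size=mat.shape[1] // 2, replace=False))
sub = mat[:, keep]

# Write the subsampled matrix, gzip it, and copy the unchanged feature list.
sio.mmwrite(f"{out_dir}/matrix.mtx", sub)
with open(f"{out_dir}/matrix.mtx", "rb") as fin, \
        gzip.open(f"{out_dir}/matrix.mtx.gz", "wb") as fout:
    shutil.copyfileobj(fin, fout)
os.remove(f"{out_dir}/matrix.mtx")
with gzip.open(f"{out_dir}/barcodes.tsv.gz", "wt") as f:
    f.write("\n".join(barcodes[keep]) + "\n")
shutil.copy(f"{raw_dir}/features.tsv.gz", f"{out_dir}/features.tsv.gz")
```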
If memory is not the issue, I suggest running SEVtras in native Linux from the command line.
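For example, a small driver script like the sketch below can be launched from a terminal so the run survives notebook disconnects and any error is captured in a log file. The argument names follow the SEVtras quick-start example; replace them with whatever you pass in your own call.

```python
# run_sevtras.py -- illustrative script; launch from a terminal with e.g.
#   nohup python run_sevtras.py > sevtras.log 2>&1 &
# so the process keeps running and errors are written to sevtras.log.
import SEVtras

SEVtras.sEV_recognizer(
    sample_file="./sample_file",  # hypothetical: one cellranger output dir per line
    out_path="./outputs",         # results land here if the run completes
    species="Homo",
)
```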