I hope this message finds you well. I am currently exploring your project and am particularly interested in the deep learning aspects of it. I have a specific query regarding the computational resources used for these tasks.
Could you please provide some insights into the GPU specifications that were utilized for running the deep learning code? Specifically, I am interested in:
The model and series of the GPU(s) used (e.g., NVIDIA Tesla, GTX, etc.).
Whether a single GPU was used or if the tasks were distributed across multiple GPUs.
The approximate size of the datasets processed and the complexity of the models, to understand the GPU requirements better.
This information will be incredibly helpful for me to gauge the hardware requirements for similar projects and to better understand the scalability of your approach.
Thank you in advance for your time and assistance. I appreciate the work you are doing and look forward to your response.
Thanks for your interest in my work.
All experiments were conducted on a single NVIDIA RTX A6000 (48 GB).
In my experience, a single RTX 3090 is sufficient for TCGA-BLCA, TCGA-BRCA, and TCGA-LUAD.
However, for TCGA-GBMLGG and TCGA-UCEC, some patients have multiple WSIs (especially in TCGA-GBMLGG), which can result in out-of-memory (OOM) errors.
In that case, you can randomly sample a fixed number of patches for these patients to reduce the memory requirements. That will not significantly impact the overall performance.
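For illustration, the subsampling step could look like the following sketch. This is not the repository's actual code: the function name, the 4096-patch cap, and the assumption that patch features are pre-extracted into an `(N, D)` NumPy array are all illustrative.

```python
import numpy as np

def subsample_patches(features, max_patches=4096, seed=0):
    """Randomly keep at most `max_patches` patch features for one patient.

    `features` is an (N, D) array of pre-extracted patch features.
    Patients with N <= max_patches are returned unchanged, so only the
    outlier patients (e.g. those with multiple WSIs) are reduced.
    """
    n = features.shape[0]
    if n <= max_patches:
        return features
    rng = np.random.default_rng(seed)
    idx = rng.choice(n, size=max_patches, replace=False)
    return features[idx]

# Example: a patient whose multiple WSIs yield 10,000 patches
feats = np.random.rand(10_000, 768)
reduced = subsample_patches(feats)
print(reduced.shape)  # (4096, 768)
```

Sampling without replacement keeps the patch set diverse, and fixing the seed makes results reproducible across runs.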