Hello, excuse me. When I ran the inference and training scripts, I specified the CUDA ID, but it always defaulted to cuda:0. How should I solve this? In short, the following error was reported: torch.distributed.elastic.multiprocessing.errors.ChildFailedError
I believe there is a straightforward implementation for multi-GPU support. You can wrap the existing script with an outer script that handles the splitting of the test set and passes the GPU IDs accordingly. This approach is similar to what FunASR did previously.
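A minimal sketch of such a wrapper in Python, assuming an `inference.py` entry point with `--test-data`/`--output` flags and a line-per-utterance test manifest; these names are hypothetical, so adapt them to the recipe you are actually running:

```python
# Hypothetical multi-GPU wrapper: shard the test manifest and launch one
# inference process per GPU. The entry point (inference.py), manifest
# format, and flags are placeholders for whatever the real recipe uses.
import os
import subprocess
import sys


def main(manifest_path, gpu_ids):
    with open(manifest_path) as f:
        lines = f.readlines()

    procs = []
    n = len(gpu_ids)
    for i, gpu in enumerate(gpu_ids):
        # Round-robin split of the test set into one shard per GPU.
        shard_path = f"{manifest_path}.shard{i}"
        with open(shard_path, "w") as f:
            f.writelines(lines[i::n])

        # Pin this worker to a single GPU; inside the subprocess the
        # visible device is always cuda:0.
        env = os.environ.copy()
        env["CUDA_VISIBLE_DEVICES"] = str(gpu)
        procs.append(subprocess.Popen(
            [sys.executable, "inference.py",
             "--test-data", shard_path,
             "--output", f"decode.shard{i}"],
            env=env,
        ))

    for p in procs:
        p.wait()


if __name__ == "__main__":
    # e.g. python run_multi_gpu.py data/test.jsonl 0,1,2,3
    manifest, ids = sys.argv[1], [int(x) for x in sys.argv[2].split(",")]
    main(manifest, ids)
```

Because each worker only sees its assigned GPU, the inner script needs no device argument at all, and the per-shard outputs can simply be concatenated afterwards for scoring.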
Thank you very much. The problem of specifying a particular card for testing has been solved. I have now encountered another problem: why are the inference results I get from fine-tuning directly with the SLAM framework not as good as the results from testing directly with the open-source Whisper model?
Originally posted by @ddlBoJack in #100 (comment)