When I run `bash dist_finetune.sh ...`, I get the error below.
How can I run `bash dist_finetune.sh ...` with only 1 GPU instead of multiple GPUs?
/opt/conda/envs/py3.9_cuda11.8/lib/python3.9/site-packages/torch/distributed/launch.py:181: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use-env is set by default in torchrun.
If your script expects `--local-rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions
warnings.warn(
python: can't open file '/mnt/d/Software/AI/mfm/2304/mfm_1/': [Errno 2] No such file or directory
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 2) local_rank: 0 (pid: 80) of binary: /opt/conda/envs/py3.9_cuda11.8/bin/python
Traceback (most recent call last):
File "/opt/conda/envs/py3.9_cuda11.8/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/opt/conda/envs/py3.9_cuda11.8/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/opt/conda/envs/py3.9_cuda11.8/lib/python3.9/site-packages/torch/distributed/launch.py", line 196, in <module>
main()
File "/opt/conda/envs/py3.9_cuda11.8/lib/python3.9/site-packages/torch/distributed/launch.py", line 192, in main
launch(args)
File "/opt/conda/envs/py3.9_cuda11.8/lib/python3.9/site-packages/torch/distributed/launch.py", line 177, in launch
run(args)
File "/opt/conda/envs/py3.9_cuda11.8/lib/python3.9/site-packages/torch/distributed/run.py", line 785, in run
elastic_launch(
File "/opt/conda/envs/py3.9_cuda11.8/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 133, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/opt/conda/envs/py3.9_cuda11.8/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 249, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2023-05-09_06:58:55
host : LZH2.localdomain
rank : 0 (local_rank: 0)
exitcode : 2 (pid: 80)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
dist_finetune.sh: line 10: main_finetune.py: command not found
dist_finetune.sh: line 11: --cfg: command not found
Hi, you can set GPUS=1 in dist_finetune.sh and adjust the batch size (set --batch-size) accordingly. You may also need to accumulate gradients (set --accumulation-steps) if there is not enough GPU memory.
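Putting those pieces together, a single-GPU version of dist_finetune.sh might look like the sketch below. This is only an illustration: the config path (`configs/finetune.yaml`) and the flag values are placeholders, and the exact variable names depend on the actual script in the repo.

```shell
#!/usr/bin/env bash
# Hypothetical single-GPU variant of dist_finetune.sh.
# GPUS, --batch-size, and --accumulation-steps come from the reply above;
# the config path and numeric values are placeholders to adapt.
GPUS=1

python -m torch.distributed.launch --nproc_per_node=$GPUS \
    main_finetune.py \
    --cfg configs/finetune.yaml \
    --batch-size 32 \
    --accumulation-steps 4
```

Note that the trailing backslashes are required line continuations: the `main_finetune.py: command not found` and `--cfg: command not found` errors at the end of the log above are the typical symptom of a missing or broken backslash, which makes bash treat each following line as a separate command.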
Thank you very much for your work on MFM!