
I only have 1 GPU; how can I run `bash dist_finetune.sh ...`? #1

Open
565ee opened this issue May 8, 2023 · 1 comment
565ee commented May 8, 2023

Thank you very much for MFM!

When I run `bash dist_finetune.sh ...`, I get the error below.
How can I run `bash dist_finetune.sh ...` with only 1 GPU instead of multiple GPUs?

/opt/conda/envs/py3.9_cuda11.8/lib/python3.9/site-packages/torch/distributed/launch.py:181: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use-env is set by default in torchrun.
If your script expects `--local-rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions

  warnings.warn(
python: can't open file '/mnt/d/Software/AI/mfm/2304/mfm_1/': [Errno 2] No such file or directory
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 2) local_rank: 0 (pid: 80) of binary: /opt/conda/envs/py3.9_cuda11.8/bin/python
Traceback (most recent call last):
  File "/opt/conda/envs/py3.9_cuda11.8/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/opt/conda/envs/py3.9_cuda11.8/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/opt/conda/envs/py3.9_cuda11.8/lib/python3.9/site-packages/torch/distributed/launch.py", line 196, in <module>
    main()
  File "/opt/conda/envs/py3.9_cuda11.8/lib/python3.9/site-packages/torch/distributed/launch.py", line 192, in main
    launch(args)
  File "/opt/conda/envs/py3.9_cuda11.8/lib/python3.9/site-packages/torch/distributed/launch.py", line 177, in launch
    run(args)
  File "/opt/conda/envs/py3.9_cuda11.8/lib/python3.9/site-packages/torch/distributed/run.py", line 785, in run
    elastic_launch(
  File "/opt/conda/envs/py3.9_cuda11.8/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 133, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/opt/conda/envs/py3.9_cuda11.8/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 249, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
 FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2023-05-09_06:58:55
  host      : LZH2.localdomain
  rank      : 0 (local_rank: 0)
  exitcode  : 2 (pid: 80)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
dist_finetune.sh: line 10: main_finetune.py: command not found
dist_finetune.sh: line 11: --cfg: command not found
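As an aside, the FutureWarning near the top of the log recommends migrating from a `--local-rank` argument to the `LOCAL_RANK` environment variable that torchrun sets. A minimal sketch of that change; the fallback to 0 is an assumption for plain single-process runs, not part of the repository code:

```python
import os

def get_local_rank() -> int:
    # torchrun sets LOCAL_RANK in each worker's environment; falling
    # back to 0 (an assumption) keeps the script usable in a plain
    # non-distributed run as well.
    return int(os.environ.get("LOCAL_RANK", 0))
```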
@Jiahao000 (Owner) commented

Hi, you can set GPUS=1 in dist_finetune.sh and adjust the batch size (set --batch-size) accordingly. You may also need to accumulate gradients (set --accumulation-steps) if there is not enough GPU memory.
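For reference, a rough sketch of what the resulting single-GPU launch could look like. The flag names follow the comment above; the config path and the numeric values are placeholders, and the exact arguments dist_finetune.sh forwards depend on the repository:

```shell
# Hypothetical single-GPU launch sketch; config path and values are
# placeholders, not the repository's actual defaults.
GPUS=1
python -m torch.distributed.launch --nproc_per_node=${GPUS} \
    main_finetune.py \
    --cfg configs/your_finetune_config.yaml \
    --batch-size 32 \
    --accumulation-steps 4
```

With one GPU, halving --batch-size while doubling --accumulation-steps keeps the effective batch size constant, which is usually what you want when reproducing multi-GPU results.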
