
Merge pull request #31 from FarukhS52/main
[Docs] : Fix typo
aria-hacker authored Oct 16, 2024
2 parents a0b0690 + 0cefdfc commit 719ff4e
Showing 3 changed files with 4 additions and 4 deletions.
2 changes: 1 addition & 1 deletion examples/nextqa/README.md
````diff
@@ -24,7 +24,7 @@ CUDA_VISIBLE_DEVICES=0 python aria/train.py --config examples/nextqa/config_lora
 ```

 ## Full Params
-Full paramater finetuning is feasible with 8 H100 GPUs, using `ZeRO3` and `Offload Parameter`. The command is as following:
+Full parameter finetuning is feasible with 8 H100 GPUs, using `ZeRO3` and `Offload Parameter`. The command is as following:
 ```bash
 accelerate launch --config_file recipes/accelerate_configs/zero3_offload.yaml aria/train.py --config examples/nextqa/config_full.yaml --output_dir [YOUR_OUT_DIR]
 ```
````
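For context on the commands in these diffs: the `zero3_offload.yaml` file they reference is an Accelerate config selecting DeepSpeed ZeRO stage 3 with CPU offload. A minimal sketch of what such a config typically looks like follows; this is an illustration using standard Accelerate/DeepSpeed keys, and the actual `recipes/accelerate_configs/zero3_offload.yaml` in the repository may differ.

```yaml
# Illustrative sketch only; the repository's zero3_offload.yaml may differ.
compute_environment: LOCAL_MACHINE
distributed_type: DEEPSPEED
deepspeed_config:
  zero_stage: 3                  # ZeRO-3: shard params, grads, and optimizer state
  offload_param_device: cpu      # "Offload Parameter": move sharded params to CPU RAM
  offload_optimizer_device: cpu  # also offload optimizer state
  zero3_init_flag: true          # init large models directly in sharded form
mixed_precision: bf16
num_machines: 1
num_processes: 8                 # one process per GPU (8x H100 in the READMEs)
```

Stage-3 sharding plus CPU offload is what makes full-parameter finetuning fit on a single 8-GPU node, at the cost of host-device transfer overhead.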
2 changes: 1 addition & 1 deletion examples/nlvr2/README.md
````diff
@@ -33,7 +33,7 @@ CUDA_VISIBLE_DEVICES=0 python aria/train.py --config examples/nlvr2/config_lora.
 ```

 ## Full Params
-Full paramater finetuning is feasible with 8 H100 GPUs, using `ZeRO3` and `Offload Parameter`. The command is as following:
+Full parameter finetuning is feasible with 8 H100 GPUs, using `ZeRO3` and `Offload Parameter`. The command is as following:
 ```bash
 accelerate launch --config_file recipes/accelerate_configs/zero3_offload.yaml aria/train.py --config examples/nlvr2/config_full.yaml --max_image_size 980 --output_dir [YOUR_OUT_DIR]
 ```
````
4 changes: 2 additions & 2 deletions examples/refcoco/README.md
````diff
@@ -23,7 +23,7 @@ CUDA_VISIBLE_DEVICES=0 python aria/train.py --config examples/refcoco/config_lor
 ```

 ## Full Params
-Full paramater finetuning is feasible with 8 H100 GPUs, using `ZeRO3` and `Offload Parameter`. The command is as following:
+Full parameter finetuning is feasible with 8 H100 GPUs, using `ZeRO3` and `Offload Parameter`. The command is as following:
 ```bash
 accelerate launch --config_file recipes/accelerate_configs/zero3_offload.yaml aria/train.py --config examples/refcoco/config_full.yaml --output_dir [YOUR_OUT_DIR]
 ```
@@ -79,4 +79,4 @@ These are the loss curves of `LoRA Finetuning` (left) and `Full Params Finetunin
 <div style="display: flex; justify-content: space-between;">
   <img src="../../assets/refcoco_loss_lora.png" alt="Left Image" style="width: 48%;">
   <img src="../../assets/refcoco_loss_full.png" alt="Right Image" style="width: 48%;">
-</div>
+</div>
````
