This repository has been archived by the owner on Aug 16, 2024. It is now read-only.

Commit 5cdd053: Refactoring entrypoints

mikecovlee committed Jul 23, 2024
1 parent ba52ecd · commit 5cdd053
Showing 29 changed files with 208 additions and 769 deletions.
11 changes: 2 additions & 9 deletions .gitignore
```diff
@@ -166,15 +166,8 @@ cython_debug/
 __pycache__/
 *.egg-info/
 *.egg
-
-data/*
-!data/AlpacaDataCleaned/
-template/*
-!data/data_demo.json
-!data/dummy_data.json
-!template/test_data_demo.json
-!template/template_demo.json
-data_train.json
+mlora.json
+mlora_train_*.json
 
 # macOS junk files
 .DS_Store
```
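The new ignore pattern `mlora_train_*.json` covers any generated training configuration. As an aside, a glob like this can be sanity-checked locally with a shell `case` statement (shell globs and `.gitignore` globs agree for simple patterns like this one, though they differ on `/` and `**`); the file names below are made-up examples, not files from the commit:

```shell
# Report whether a file name matches the new .gitignore glob
# mlora_train_*.json, using shell case-pattern matching.
matches() {
  case "$1" in
    mlora_train_*.json) echo yes ;;
    *) echo no ;;
  esac
}

matches "mlora_train_casual_0.json"  # prints "yes"
matches "mlora.py"                   # prints "no"
```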
1 change: 0 additions & 1 deletion Dockerfile
```diff
@@ -45,7 +45,6 @@ RUN . ~/.bashrc \
     && cd /mLoRA \
     && pyenv virtualenv $PYTHON_VERSION mlora \
     && pyenv local mlora \
-    && pip install torch==2.3.1 \
     && pip install -r ./requirements.txt
 
 WORKDIR /mLoRA
```
16 changes: 12 additions & 4 deletions README.md
````diff
@@ -121,13 +121,15 @@ You can conveniently utilize m-LoRA via `launch.py`. The following example demonstrates
 
 ```bash
 # Generating configuration
-python launch.py gen --template lora --tasks ./data/dummy_data.json
+python launch.py gen --template lora --tasks ./tests/dummy_data.json
+
+# Running the training task
+python launch.py run --base_model TinyLlama/TinyLlama_v1.1
 
 # Try with gradio web ui
 python inference.py \
   --base_model TinyLlama/TinyLlama_v1.1 \
-  --template ./template/alpaca.json \
+  --template alpaca \
   --lora_weights ./casual_0
 ```
 
@@ -140,15 +142,21 @@ python launch.py help
 ## m-LoRA
 
 The `mlora.py` code is a starting point for finetuning on various datasets.
 
 Basic command for finetuning a baseline model on the [Alpaca Cleaned](https://github.com/gururise/AlpacaDataCleaned) dataset:
 ```bash
+# Generating configuration
+python launch.py gen \
+  --template lora \
+  --tasks yahma/alpaca-cleaned
+
 python mlora.py \
   --base_model meta-llama/Llama-2-7b-hf \
-  --config ./config/alpaca.json \
+  --config mlora.json \
   --bf16
 ```
 
-You can check the template finetune configuration in [template](./template/) folder.
+You can check the template finetune configuration in [templates](./templates/) folder.
 
 For further detailed usage information, please use `--help` option:
 ```bash
````
59 changes: 0 additions & 59 deletions config/alpaca.json

This file was deleted.

89 changes: 0 additions & 89 deletions config/alpaca_mixlora.json

This file was deleted.

59 changes: 0 additions & 59 deletions config/dummy.json

This file was deleted.

57 changes: 0 additions & 57 deletions config/dummy_glm.json

This file was deleted.

