diff --git a/ARCHITECTURE.md b/ARCHITECTURE.md index ee13628c2..75e884e81 100644 --- a/ARCHITECTURE.md +++ b/ARCHITECTURE.md @@ -3,7 +3,7 @@ ## Overview -The `dstack` toolkit consists of five major components: +The `dstack` platform consists of five major components: * the server * the Python API diff --git a/README.md b/README.md index 609251db6..0d1af9acd 100644 --- a/README.md +++ b/README.md @@ -9,7 +9,7 @@

-Orchestrate GPU workloads across clouds +Effortlessly train and deploy generative AI

@@ -23,7 +23,7 @@ Orchestrate GPU workloads across clouds [![PyPI - License](https://img.shields.io/pypi/l/dstack?style=flat-square&color=blue)](https://github.com/dstackai/dstack/blob/master/LICENSE.md) -`dstack` is an open-source toolkit for training, fine-tuning, and deployment of +`dstack` is an open-source platform for training, fine-tuning, and deployment of generative AI models across various cloud providers (e.g., AWS, GCP, Azure, Lambda Cloud, etc.) ## Latest news ✨ @@ -57,7 +57,7 @@ $ pip install "dstack[all]" -U If you have default AWS, GCP, or Azure credentials on your machine, `dstack` will pick them up automatically. Otherwise, you need to manually specify the cloud credentials in `~/.dstack/server/config.yml`. -For further cloud configuration details, refer to [Clouds](https://dstack.ai/docs/guides/clouds.md). +For further cloud configuration details, refer to [Clouds](https://dstack.ai/docs/configuration/server). ### Start the server diff --git a/docs/assets/stylesheets/extra.css b/docs/assets/stylesheets/extra.css index c1a9d37c9..fedc45d9e 100644 --- a/docs/assets/stylesheets/extra.css +++ b/docs/assets/stylesheets/extra.css @@ -5,7 +5,8 @@ } [dir=ltr] .md-header__source { - margin-left: 0.2rem; + margin-left: 0; + width: 10rem; } .md-source__facts { @@ -352,6 +353,13 @@ h4.doc-heading { margin: 0 4px; } +.md-typeset :is(h2, h3, h4) > code { + background-color: inherit; + color: inherit; + padding: 0; + margin: 0; +} + .md-typeset :not(td, pre, h2, h3, h4) > code { font-size: 0.65rem; } @@ -993,8 +1001,8 @@ html .md-footer-meta.md-typeset a:is(:focus,:hover) { } .md-tabs__item:nth-child(n+5) .md-tabs__link:before { - width: 30px; - height: 30px; + width: 38px; + height: 38px; margin-top: 6px; visibility: visible; } @@ -1005,8 +1013,8 @@ html .md-footer-meta.md-typeset a:is(:focus,:hover) { } .md-tabs__item:nth-child(n+6) .md-tabs__link:before { - width: 30px; - height: 30px; + width: 38px; + height: 38px; margin-top: 6px; visibility: visible; } @@ 
-1024,11 +1032,11 @@ html .md-footer-meta.md-typeset a:is(:focus,:hover) { }*/ .md-tabs__item:nth-child(5) .md-tabs__link:before { - content: url('data:image/svg+xml,'); + content: url('data:image/svg+xml,'); } .md-tabs__item:nth-child(6) .md-tabs__link:before { - content: url('data:image/svg+xml,'); + content: url('data:image/svg+xml,'); } .md-tabs__link { @@ -1080,10 +1088,16 @@ html .md-footer-meta.md-typeset a:is(:focus,:hover) { opacity: 1; } +[dir=ltr] .md-source__icon+.md-source__repository { + margin-left: -2.6rem !important; +} + .md-source__icon.md-icon svg { - height: 1.4rem; - width: 1.4rem; + height: 1.38rem; + width: 1.38rem; fill: none; + margin-left: -0.1rem; + margin-top: 0.61rem; } .md-source__facts { @@ -1097,6 +1111,7 @@ html .md-footer-meta.md-typeset a:is(:focus,:hover) { @media screen and (min-width: 76.25em) { .md-search .md-search__inner { padding-top: 0.2rem; + margin-right: 0.8rem; } [data-md-toggle=search]:checked ~ .md-header .md-search__inner, .md-search__scrollwrap { diff --git a/docs/assets/stylesheets/landing.css b/docs/assets/stylesheets/landing.css index a1e681077..3caabc594 100644 --- a/docs/assets/stylesheets/landing.css +++ b/docs/assets/stylesheets/landing.css @@ -120,7 +120,7 @@ position: relative; left: 50%; transform: translateX(-50%); - margin-top: 3rem; + margin-top: 2.5rem; padding-top: 4.5rem; padding-bottom: 4.5rem; /*border-top-left-radius: 2.5rem;*/ diff --git a/docs/blog/posts/simplified-cloud-setup.md b/docs/blog/posts/simplified-cloud-setup.md index dabfbd0a6..1db7bd614 100644 --- a/docs/blog/posts/simplified-cloud-setup.md +++ b/docs/blog/posts/simplified-cloud-setup.md @@ -40,7 +40,7 @@ projects: Regions and other settings are optional. Learn more on what credential types are supported -via [Clouds](../../docs/guides/clouds.md). +via [Clouds](../../docs/configuration/server.md). ## Enhanced API @@ -98,7 +98,7 @@ This means you'll need to delete `~/.dstack` and configure `dstack` from scratch 1. 
`pip install "dstack[all]==0.12.0"` 2. Delete `~/.dstack` -3. Configure clouds via `~/.dstack/server/config.yml` (see the [new guide](../../docs/guides/clouds.md)) +3. Configure clouds via `~/.dstack/server/config.yml` (see the [new guide](../../docs/configuration/server.md)) 4. Run `dstack server` The [documentation](../../docs/index.md) and [examples](../../examples/index.md) are updated. diff --git a/docs/docs/guides/clouds.md b/docs/docs/configuration/server.md similarity index 92% rename from docs/docs/guides/clouds.md rename to docs/docs/configuration/server.md index 03d849f8c..5b9c1c051 100644 --- a/docs/docs/guides/clouds.md +++ b/docs/docs/configuration/server.md @@ -1,9 +1,12 @@ -# Clouds +# Server configuration -For every project, `dstack` allows you to configure and use multiple cloud accounts. +The `dstack` server manages your workloads' state and orchestrates them across configured cloud providers. -To configure a cloud account, provide its credentials and other settings via `~/.dstack/server/config.yml` -under the `backends` property of the respective project. +For flexibility, the server allows you to configure multiple projects and users. Within each project, you can set up +multiple cloud accounts. + +To configure a cloud account, specify its settings in `~/.dstack/server/config.yml` under the `backends` property +of the respective project. Example: @@ -28,7 +31,7 @@ projects: [//]: # (If you run the `dstack` server without creating `~/.dstack/server/config.yml`, `dstack` will attempt to automatically detect the) [//]: # (default credentials for AWS, GCP, and Azure and create the configuration.) -## Credentials +## Cloud credentials ### AWS @@ -249,7 +252,7 @@ projects: -## Other settings +## Cloud regions In addition to credentials, each cloud optionally allows for region configuration. 
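The `~/.dstack/server/config.yml` layout described in the renamed `server.md` page (projects, each with a `backends` list of credentials and optional regions) can be sketched as plain data. This is an illustrative sketch only; the `validate` helper is hypothetical and not part of dstack, and the backend/credential keys follow the example shown in the doc:

```python
# Illustrative Python view of the ~/.dstack/server/config.yml layout
# (projects -> backends -> credentials/regions). The validate() helper
# below is hypothetical, added only to show the expected nesting.
config = {
    "projects": [
        {
            "name": "main",
            "backends": [
                {
                    "type": "aws",
                    "creds": {"type": "default"},
                    "regions": ["us-east-1", "eu-west-1"],  # regions are optional
                },
            ],
        },
    ],
}

def validate(config: dict) -> list:
    """Return (project name, backend type) pairs found in the config."""
    found = []
    for project in config.get("projects", []):
        for backend in project.get("backends", []):
            # every backend entry needs at least a `type` and `creds`
            assert "type" in backend and "creds" in backend
            found.append((project["name"], backend["type"]))
    return found
```

A server configured this way would expose one project (`main`) with a single AWS backend restricted to the listed regions.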
diff --git a/docs/docs/guides/fine-tuning.md b/docs/docs/guides/fine-tuning.md index 7906d1f74..bf4b09f78 100644 --- a/docs/docs/guides/fine-tuning.md +++ b/docs/docs/guides/fine-tuning.md @@ -11,6 +11,8 @@ Hugging Face model with SFT or DPO techniques in your cloud with just one line o pip install "dstack[all]==0.12.1rc1" ``` + Also, make sure you've configured clouds and started the server. + First, you connect to the `dstack` server: ```python @@ -29,7 +31,6 @@ from dstack.api.huggingface import SFTFineTuningTask task = SFTFineTuningTask(model_name="NousResearch/Llama-2-13b-hf", dataset_name="peterschmidt85/samsum", - new_model_name="Llama-2-13b-samsum", num_train_epochs=2, env={ "`HUGGING_FACE_HUB_TOKEN`": "...", @@ -45,16 +46,13 @@ And finally, submit the task: from dstack.api import Resources, GPU run = client.runs.submit( - run_name="Llama-2-13b-samsum", + run_name="Llama-2-13b-samsum", # (Optional) If unset, it's chosen randomly configuration=task, resources=Resources(gpu=GPU(memory="24GB", count=4)), ) ``` -`dstack` automatically provisions necessary resources in the configured cloud, does training, and pushes -the final model to the Hugging Face hub. - -## Integrations +When submitting a task, you can configure resources, along with [many other options](../../docs/reference/api/python/index.md#dstack.api.RunCollection.submit). To track experiment metrics, specify `report_to` and related authentication environment variables.
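The updated comment on `run_name` above says an unset name is chosen randomly. A hypothetical sketch of that fallback (the helper and the `run-` naming scheme are illustrative, not dstack's actual implementation):

```python
import random
import string

def resolve_run_name(run_name=None):
    # Hypothetical helper mirroring the documented behavior of
    # client.runs.submit(run_name=...): an explicitly passed name
    # wins; otherwise a random name is generated for the run.
    if run_name is not None:
        return run_name
    suffix = "".join(random.choices(string.ascii_lowercase + string.digits, k=8))
    return f"run-{suffix}"
```

So `resolve_run_name("Llama-2-13b-samsum")` keeps the given name, while `resolve_run_name()` yields a fresh random one per call.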
Currently, the API supports `"tensorboard"` and `"wandb"`: @@ -62,7 +60,6 @@ supports `"tensorboard"` and `"wandb"`: ```python task = SFTFineTuningTask(model_name="NousResearch/Llama-2-13b-hf", dataset_name="peterschmidt85/samsum", - new_model_name="Llama-2-13b-samsum", num_train_epochs=2, report_to="wandb", env={ @@ -74,7 +71,7 @@ task = SFTFineTuningTask(model_name="NousResearch/Llama-2-13b-hf", [//]: # (TODO: Add W&B screenshot) -You can use the [methods](../../docs/reference/api/python/index.md#dstack.api.Client) on `dstack.api.Client` to manage your runs, including getting a list of runs, stopping a given +You can use the [methods](../../docs/reference/api/python/index.md#dstack.api.Client) on `client` to manage your runs, including getting a list of runs, stopping a given run, etc. -The `dstack.api.Client.runs.submit` allows for configuring resources as well as [many other options](../../docs/reference/api/python/index.md#dstack.api.RunCollection.submit). \ No newline at end of file +When the training is done, `dstack` pushes the final model to the Hugging Face hub. \ No newline at end of file diff --git a/docs/docs/index.md b/docs/docs/index.md index 33afd0c24..aba786ef7 100644 --- a/docs/docs/index.md +++ b/docs/docs/index.md @@ -1,11 +1,11 @@ # Quickstart -`dstack` is an open-source toolkit for training, fine-tuning, and deployment of +`dstack` is an open-source platform for training, fine-tuning, and deployment of generative AI models across various cloud providers. (1) { .annotate } 1. You can use various cloud accounts (e.g., AWS, GCP, Azure, Lambda Cloud) by configuring - their credentials. The framework can optimize costs by running workloads across multiple + their credentials. The platform can optimize costs by running workloads across multiple regions and cloud accounts. ## Set up the server @@ -31,12 +31,12 @@ $ pip install "dstack[all]" -U Another way to install the server is through [Docker](https://hub.docker.com/r/dstackai/dstack). 
-### Configure clouds +### Configure the server -If you have default AWS, GCP, or Azure credentials on your machine, `dstack` will pick them up automatically. +If you have default AWS, GCP, or Azure credentials on your machine, the `dstack` server will pick them up automatically. Otherwise, you need to manually specify the cloud credentials in `~/.dstack/server/config.yml`. -For further cloud configuration details, refer to [Clouds](guides/clouds.md). +For further details, refer to [server configuration](configuration/server.md). ### Start the server diff --git a/docs/docs/installation/docker.md b/docs/docs/installation/docker.md index 9fb431d7f..8ac45e8f7 100644 --- a/docs/docs/installation/docker.md +++ b/docs/docs/installation/docker.md @@ -14,7 +14,7 @@ $ docker run --name dstack -p <port-on-host>:3000 \ !!! info "Configure clouds" Upon startup, the server sets up the default project called `main`. - Prior to using `dstack`, make sure to [configure clouds](../guides/clouds.md#configure-backends). + Prior to using `dstack`, make sure to [configure clouds](../configuration/server.md). ## Environment variables diff --git a/docs/docs/installation/pip.md b/docs/docs/installation/pip.md index 39f8ee10c..32e36e859 100644 --- a/docs/docs/installation/pip.md +++ b/docs/docs/installation/pip.md @@ -15,4 +15,4 @@ The server is available at http://127.0.0.1:3000?token=b934d226-e24a-4eab-eb92b3 !!! info "Configure clouds" Upon startup, the server sets up the default project called `main`. - Prior to using `dstack`, make sure to [configure clouds](../guides/clouds.md#configure-backends). \ No newline at end of file + Prior to using `dstack`, make sure to [configure clouds](../configuration/server.md). 
\ No newline at end of file diff --git a/docs/docs/reference/api/python/index.md b/docs/docs/reference/api/python/index.md index ba1d4f3d7..d2401bebf 100644 --- a/docs/docs/reference/api/python/index.md +++ b/docs/docs/reference/api/python/index.md @@ -2,9 +2,9 @@ The Python API allows for running tasks, services, and managing runs programmatically. -## dstack.api { #dstack.api } +## `dstack.api` { #dstack.api data-toc-label="dstack.api" } -### dstack.api.Client { #dstack.api.Client data-toc-label="Client" } +### `dstack.api.Client` { #dstack.api.Client data-toc-label="Client" } ::: dstack.api.Client options: @@ -12,7 +12,7 @@ The Python API allows for running tasks, services, and managing runs programmati show_root_toc_entry: false heading_level: 4 -### dstack.api.Task { #dstack.api.Task data-toc-label="Task" } +### `dstack.api.Task` { #dstack.api.Task data-toc-label="Task" } ::: dstack.api.Task options: @@ -21,7 +21,7 @@ The Python API allows for running tasks, services, and managing runs programmati show_root_toc_entry: false heading_level: 4 -### dstack.api.Service { #dstack.api.Service data-toc-label="Service" } +### `dstack.api.Service` { #dstack.api.Service data-toc-label="Service" } ::: dstack.api.Service options: @@ -30,7 +30,7 @@ The Python API allows for running tasks, services, and managing runs programmati show_root_toc_entry: false heading_level: 4 -### dstack.api.Run { ##dstack.api.Run data-toc-label="Run" } +### `dstack.api.Run` { #dstack.api.Run data-toc-label="Run" } ::: dstack.api.Run options: @@ -39,7 +39,7 @@ The Python API allows for running tasks, services, and managing runs programmati show_root_toc_entry: false heading_level: 4 -### dstack.api.Client.runs { #dstack.api.Client.runs data-toc-label="runs" } +### `dstack.api.Client.runs` { #dstack.api.Client.runs data-toc-label="runs" } ::: dstack.api.RunCollection options: @@ -47,7 +47,7 @@ The Python API allows for running tasks, services, and managing runs programmati show_root_toc_entry:
false heading_level: 4 -### dstack.api.Client.repos { #dstack.api.Client.repos data-toc-label="repos" } +### `dstack.api.Client.repos` { #dstack.api.Client.repos data-toc-label="repos" } ::: dstack.api.RepoCollection options: @@ -55,7 +55,7 @@ The Python API allows for running tasks, services, and managing runs programmati show_root_toc_entry: false heading_level: 4 -### dstack.api.Client.backends { #dstack.api.Client.backends data-toc-label="backends" } +### `dstack.api.Client.backends` { #dstack.api.Client.backends data-toc-label="backends" } ::: dstack.api.BackendCollection options: @@ -63,9 +63,9 @@ The Python API allows for running tasks, services, and managing runs programmati show_root_toc_entry: false heading_level: 4 -## dstack.api.huggingface +## `dstack.api.huggingface` { #dstack.api.huggingface data-toc-label="dstack.api.huggingface" } -### dstack.api.huggingface.SFTFineTuningTask { #dstack.api.huggingface.SFTFineTuningTask data-toc-label="SFTFineTuningTask" } +### `dstack.api.huggingface.SFTFineTuningTask` { #dstack.api.huggingface.SFTFineTuningTask data-toc-label="SFTFineTuningTask" } ::: dstack.api.huggingface.SFTFineTuningTask options: diff --git a/docs/examples/stable-diffusion-xl.md b/docs/examples/stable-diffusion-xl.md index 05aa3063c..bdb7efe38 100644 --- a/docs/examples/stable-diffusion-xl.md +++ b/docs/examples/stable-diffusion-xl.md @@ -191,7 +191,7 @@ commands: ## Run the configuration !!! warning "NOTE:" - Before running a service, ensure that you have configured a [gateway](../docs/guides/clouds.md#configuring-gateways). + Before running a service, ensure that you have configured a [gateway](../docs/guides/services.md#set-up-a-gateway). After the gateway is configured, go ahead run the service. @@ -204,7 +204,7 @@ $ dstack run . -f stable-diffusion-xl/api.dstack.yml !!! 
info "Endpoint URL" - If you've configured a [wildcard domain](../docs/guides/clouds.md#configuring-gateways) for the gateway, + If you've configured a [wildcard domain](../docs/guides/services.md#set-up-a-gateway) for the gateway, `dstack` enables HTTPS automatically and serves the service at `https://.`. diff --git a/docs/examples/text-generation-inference.md b/docs/examples/text-generation-inference.md index 3716f44d3..606d99ba1 100644 --- a/docs/examples/text-generation-inference.md +++ b/docs/examples/text-generation-inference.md @@ -35,7 +35,7 @@ commands: ## Run the configuration !!! warning "Gateway" - Before running a service, ensure that you have configured a [gateway](../docs/guides/clouds.md#configuring-gateways). + Before running a service, ensure that you have configured a [gateway](../docs/guides/services.md#set-up-a-gateway).

@@ -46,7 +46,7 @@ $ dstack run . -f text-generation-inference/serve.dstack.yml --gpu 24GB
!!! info "Wildcard domain" - If you've configured a [wildcard domain](../docs/guides/clouds.md#configuring-gateways) for the gateway, + If you've configured a [wildcard domain](../docs/guides/services.md#set-up-a-gateway) for the gateway, `dstack` enables HTTPS automatically and serves the service at `https://.`. diff --git a/docs/examples/vllm.md b/docs/examples/vllm.md index 5095c12db..45897c3f4 100644 --- a/docs/examples/vllm.md +++ b/docs/examples/vllm.md @@ -36,7 +36,7 @@ commands: ## Run the configuration !!! warning "Gateway" - Before running a service, ensure that you have configured a [gateway](../docs/guides/clouds.md#configuring-gateways). + Before running a service, ensure that you have configured a [gateway](../docs/guides/services.md#set-up-a-gateway).
@@ -47,7 +47,7 @@ $ dstack run . -f vllm/serve.dstack.yml --gpu 24GB
!!! info "Wildcard domain" - If you've configured a [wildcard domain](../docs/guides/clouds.md#configuring-gateways) for the gateway, + If you've configured a [wildcard domain](../docs/guides/services.md#set-up-a-gateway) for the gateway, `dstack` enables HTTPS automatically and serves the service at `https://.`. diff --git a/docs/index.md b/docs/index.md index 324d17e65..b34ad3948 100644 --- a/docs/index.md +++ b/docs/index.md @@ -1,6 +1,6 @@ --- template: home.html -title: Orchestrate GPU workloads across clouds +title: Effortlessly train and deploy generative AI hide: - navigation - toc diff --git a/docs/overrides/.icons/custom/github.svg b/docs/overrides/.icons/custom/github.svg index 99d4a16dd..fe24d0e0d 100644 --- a/docs/overrides/.icons/custom/github.svg +++ b/docs/overrides/.icons/custom/github.svg @@ -1 +1 @@ - \ No newline at end of file + \ No newline at end of file diff --git a/docs/overrides/header.html b/docs/overrides/header.html index 806186f0e..b92a9a62e 100644 --- a/docs/overrides/header.html +++ b/docs/overrides/header.html @@ -47,12 +47,17 @@ {% endif %} {% if "material/search" in config.plugins %} + {% endif %} -
- Open-source - Cloud GPU + {% if config.repo_url %} +
+ {% include "partials/source.html" %}
{% endif %} + {% if "navigation.tabs.sticky" in features %} {% if "navigation.tabs" in features %} diff --git a/docs/overrides/home.html b/docs/overrides/home.html index 9525e792f..8c52795d0 100644 --- a/docs/overrides/home.html +++ b/docs/overrides/home.html @@ -125,18 +125,19 @@
-

An easier way to train and deploy +

Effortlessly train and deploy generative AI

- Effortlessly train and deploy generative AI models with dstack, on your cloud or using our cloud GPU. + Train and deploy generative AI models on any cloud with a few lines of code. + Use your own cloud account, or cloud GPUs provided by dstack.

- Install open-source + Install open-source
@@ -147,13 +148,21 @@

An easier way to train and - Use with your own cloud accounts + Use your own cloud accounts

- + Join the community + + +
+ Get support and learn from
the community +
+ +
@@ -368,14 +377,14 @@

Get started in less than a minute

- Done! Configure clouds, and use the CLI or API to train + Done! Configure clouds, and use the CLI or Python API to train and deploy generative AI models.

Install open-source - - Request cloud GPU + + Join the community
diff --git a/mkdocs.yml b/mkdocs.yml index b7be44347..b76383203 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -74,8 +74,9 @@ plugins: 'docs/quick-start.md': 'docs/index.md' 'docs/installation/index.md': 'docs/index.md' 'tutorials/stable-diffusion.md': 'examples/stable-diffusion-xl.md' - 'docs/guides/projects.md': 'docs/guides/clouds.md' + 'docs/guides/projects.md': 'docs/configuration/server.md' 'examples/python-api.md': 'examples/deploy-python.md' + 'docs/guides/clouds.md': 'docs/configuration/server.md' - typeset - gen-files: scripts: # always relative to mkdocs.yml @@ -154,8 +155,9 @@ nav: # - Installation: # - pip: docs/installation/pip.md # - Docker: docs/installation/docker.md + - Configuration: + - Server configuration: docs/configuration/server.md - Guides: - - Clouds: docs/guides/clouds.md - Dev environments: docs/guides/dev-environments.md - Tasks: docs/guides/tasks.md - Services: docs/guides/services.md diff --git a/setup.py b/setup.py index 1b2540c79..1c1bfd781 100644 --- a/setup.py +++ b/setup.py @@ -122,8 +122,8 @@ def get_long_description(): project_urls={ "Source": "https://github.com/dstackai/dstack", }, - description="dstack is an open-source framework for orchestration GPU workloads and development of generative AI " - "models across multiple clouds.", + description="dstack is an open-source platform for training, fine-tuning, and deployment of generative AI models " + "across various cloud providers.", long_description=get_long_description(), long_description_content_type="text/markdown", python_requires=">=3.8", diff --git a/src/dstack/api/_public/huggingface/finetuning/sft/__init__.py b/src/dstack/api/_public/huggingface/finetuning/sft/__init__.py index 5bb93e483..a7cdd1fdf 100644 --- a/src/dstack/api/_public/huggingface/finetuning/sft/__init__.py +++ b/src/dstack/api/_public/huggingface/finetuning/sft/__init__.py @@ -23,7 +23,7 @@ class SFTFineTuningTask(TaskConfiguration): Args: model_name: The model that you want to train from the Hugging Face hub. E.g.
gpt2, gpt2-xl, bert, etc. dataset_name: The instruction dataset to use. - new_model_name: The name under which to push the fine-tuned model to the Hugging Face Hub. + new_model_name: The name to use for pushing the fine-tuned model to the Hugging Face Hub. If unset, it defaults to the name of the run. report_to: Supported integrations include `"wandb"` and `"tensorboard"`. env: The list of environment variables, which defaults to those of the current process. It must include `"HUGGING_FACE_HUB_TOKEN"` and related variables required by the integration specified in @@ -60,8 +60,8 @@ def __init__( self, model_name: str, dataset_name: str, - new_model_name: str, env: Dict[str, str], + new_model_name: Optional[str] = None, report_to: Optional[str] = None, per_device_train_batch_size: int = 4, per_device_eval_batch_size: int = 4, @@ -128,13 +128,14 @@ def __init__( # TODO: Support more integrations # Validating environment variables _ = env["HUGGING_FACE_HUB_TOKEN"] + report_to_env = "" if report_to == "wandb": _ = env["WANDB_API_KEY"] - _ = env["WANDB_PROJECT"] + report_to_env += "WANDB_PROJECT=${WANDB_PROJECT:-$REPO_ID} WANDB_RUN_ID=$RUN_NAME" python_command = re.sub( " +", " ", - f"HF_HUB_ENABLE_HF_TRANSFER=1 python train.py --model_name {model_name} --new_model_name {new_model_name} --dataset_name {dataset_name} --merge_and_push {args}", + f"{report_to_env} HF_HUB_ENABLE_HF_TRANSFER=1 python train.py --model_name {model_name} --new_model_name {new_model_name or '$RUN_NAME'} --dataset_name {dataset_name} --merge_and_push {args}", ).strip() pip_install_command = "pip install -r requirements.txt" commands = [pip_install_command] @@ -156,4 +157,4 @@ def _get_arg(name, value: Any, default: Any) -> str: return "" def get_repo(self) -> SFTFineTuningTaskRepo: - return SFTFineTuningTaskRepo(repo_id="dstack.api._public.huggingface.finetuning") + return SFTFineTuningTaskRepo(repo_id="dstack.api._public.huggingface.finetuning.sft")
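The `__init__.py` hunk above changes how the training command is built: `new_model_name` now falls back to `$RUN_NAME`, and the W&B environment is injected as a command prefix instead of a validated variable. A simplified, self-contained sketch of that command construction (`args` and other options from the real constructor are omitted here):

```python
import re

def build_train_command(model_name, dataset_name, new_model_name=None, report_to=None):
    # Mirrors the fallback introduced in SFTFineTuningTask: when
    # new_model_name is unset, the run name (exposed to the container
    # as $RUN_NAME) is used for the model pushed to the Hub.
    report_to_env = ""
    if report_to == "wandb":
        # WANDB_PROJECT defaults to the repo id; the W&B run id tracks the dstack run
        report_to_env = "WANDB_PROJECT=${WANDB_PROJECT:-$REPO_ID} WANDB_RUN_ID=$RUN_NAME"
    return re.sub(
        " +",
        " ",
        f"{report_to_env} HF_HUB_ENABLE_HF_TRANSFER=1 python train.py "
        f"--model_name {model_name} --new_model_name {new_model_name or '$RUN_NAME'} "
        f"--dataset_name {dataset_name} --merge_and_push",
    ).strip()
```

With no `new_model_name`, the command carries `--new_model_name $RUN_NAME`, which the `train.py` side resolves to the run's name; the `re.sub`/`strip` pair collapses the extra whitespace left by an empty `report_to_env`.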