Add LLM batch inference examples #493

Merged · 17 commits merged into NVIDIA:branch-25.02 on Feb 14, 2025

Conversation

@rishic3 rishic3 (Collaborator) commented Feb 10, 2025

Add deepseek-r1 and gemma-7b LLM batch inference notebooks.
Updated CSP instructions since these notebooks require >20GB GPU RAM (A10/L4).

@rishic3 rishic3 marked this pull request as ready for review February 11, 2025 01:43
@rishic3 rishic3 requested a review from eordentlich February 11, 2025 01:44
@rishic3 rishic3 (Collaborator Author) commented Feb 11, 2025

@leewyang If you get the chance, welcoming suggestions on these initial examples + future extensions 🙂

@eordentlich eordentlich (Collaborator) left a comment

Overall looks great. A few comments, questions.

@@ -34,22 +34,26 @@
databricks workspace import $INIT_DEST --format AUTO --file $INIT_SRC
```

6. Launch the cluster with the provided script (note that the script specifies **Azure instances** by default; change as needed):
6. Launch the cluster with the provided script.
**Note:** The LLM examples (e.g. deepseek-r1, gemma-7b) require greater GPU RAM (>18GB). For these notebooks, we recommend modifying the startup script node types to use A10 GPU instances. Note that the script specifies **Azure instances** by default; change as needed.
Collaborator:

We might want to just recommend a10 across the board.

Collaborator Author:

Done

@@ -50,7 +50,12 @@
```shell
export FRAMEWORK=torch
```
Run the cluster startup script. The script will also retrieve and use the [spark-rapids initialization script](https://github.com/GoogleCloudDataproc/initialization-actions/blob/master/spark-rapids/spark-rapids.sh) to setup GPU resources.
**Note:** The LLM examples (e.g. deepseek-r1, gemma-7b) require greater GPU RAM (>18GB). For these notebooks, setting the following environment variable will tell the startup script to use L4 GPUs instead of the default T4 GPUs.
Collaborator:

L4's across the board.

Collaborator Author:

Done

"- Wrap the Triton inference function in a predict_batch_udf to launch parallel inference requests using Spark.\n",
"- Finally, distribute a shutdown signal to terminate the Triton server processes on each node.\n",
"\n",
"<img src=\"../images/spark-pytriton.png\" alt=\"drawing\" width=\"700\"/>"
Collaborator:

As I look at this figure more closely, it is a little confusing that only 2 executors run the start-triton tasks while 4 executors run the prediction tasks. Should the number of executors be constant, with possibly multiple inference tasks running in parallel within an executor? Also, worker <-> node? And referencing tasks on the driver is a little confusing too.

Collaborator Author:

Fixed
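
(For context, a rough sketch of the flow the updated cell describes. This is not the notebook's exact code; `url`, `model_name`, the `"outputs"` tensor name, and the `prompt` column are placeholders.)

```python
# Sketch: wrap a Triton inference call in a predict_batch_udf and apply it to a
# Spark DataFrame column. Assumes a Triton server is already running on each
# node and serving `model_name` at `url`; input encoding/reshaping details that
# a real model may need are omitted.
import numpy as np
from pyspark.ml.functions import predict_batch_udf
from pyspark.sql.types import StringType
from pytriton.client import ModelClient

def triton_fn(url: str, model_name: str):
    def infer_batch(inputs: np.ndarray) -> np.ndarray:
        # One client per batch, as in the notebook excerpt discussed below.
        with ModelClient(url, model_name, inference_timeout_s=500) as client:
            result = client.infer_batch(inputs)
        return result["outputs"]  # assumed output tensor name

    return infer_batch

generate = predict_batch_udf(
    lambda: triton_fn(url="localhost:8000", model_name="gemma-7b"),
    return_type=StringType(),
    batch_size=32,
)

# Placeholder DataFrame of prompts; `spark` is the active SparkSession.
df = spark.createDataFrame([("What is Apache Spark?",)], ["prompt"])
preds = df.withColumn("output", generate("prompt"))
```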

" print(f\"Connecting to Triton model {model_name} at {url}.\")\n",
"\n",
" def infer_batch(inputs):\n",
" with ModelClient(url, model_name, inference_timeout_s=500) as client:\n",
Collaborator:

This has appeared throughout these PRs, but does this mean a new connection is created to triton for each batch of data? I guess overhead isn't that much.

Collaborator Author:

Yep, there is a small overhead - the client will send a request to the server for model configuration (shapes, types, etc.) with every new connection. Essentially just a local ping for a pbtxt, no compute, so I doubt it's significant.

That said if we create the client outside the predict function, predict_batch_udf can cache the client on the executor side. We would just need some way to gracefully close it on shutdown.
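
(As a rough illustration of that alternative, not the notebook's code, and with the output tensor name assumed: the client moves into the outer function so predict_batch_udf caches it per executor.)

```python
import numpy as np
from pytriton.client import ModelClient

def triton_fn(url: str, model_name: str):
    # Created once when predict_batch_udf initializes the function on an executor,
    # so the model-configuration request happens once instead of per batch.
    client = ModelClient(url, model_name, inference_timeout_s=500)

    def infer_batch(inputs: np.ndarray) -> np.ndarray:
        result = client.infer_batch(inputs)
        return result["outputs"]  # assumed output tensor name

    # Caveat from the comment above: nothing here closes `client` on shutdown;
    # that would need a separate cleanup hook (e.g. atexit) or an explicit signal.
    return infer_batch
```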

},
"node_type_id": "Standard_NC8as_T4_v3",
"driver_node_type_id": "Standard_NC8as_T4_v3",
"node_type_id": "Standard_NV12ads_A10_v5",
@rishic3 rishic3 (Collaborator Author) Feb 14, 2025:

I'm not sure if this is a "popular" node to have on Azure Databricks. It would be good to know the most commonly used A10 instance so we could have that as the default.

@eordentlich eordentlich (Collaborator) left a comment

👍

@rishic3 rishic3 merged commit 1bc43fb into NVIDIA:branch-25.02 Feb 14, 2025
3 checks passed