
Commit efd0ed6

Use dataset streaming, cleanup diagram (#497)
Use streaming to avoid saving the entire Huggingface dataset to disk for large datasets. Updated the diagram for clarity regarding client/server interaction.

Signed-off-by: Rishi Chandra <[email protected]>
1 parent 13ff674 commit efd0ed6
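The core change in the notebooks below replaces a slice-and-download load (`split="train[:1%]"` followed by `to_pandas()`) with a streaming load that lazily pulls only the first 500 samples. A minimal sketch of the pattern, using the dataset and column from the DeepSeek notebook (the final `print` is for illustration only):

```python
import pandas as pd
from datasets import load_dataset

# streaming=True returns an IterableDataset: rows are fetched lazily from the
# Hugging Face Hub instead of downloading and caching the full dataset to disk.
dataset = load_dataset(
    "microsoft/orca-math-word-problems-200k", split="train", streaming=True
)

# take(500) limits iteration to the first 500 samples; nothing else is pulled.
questions = pd.Series([sample["question"] for sample in dataset.take(500)])

print(questions.head())  # illustration only
```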

File tree

4 files changed: +25 -23 lines


examples/ML+DL-Examples/Spark-DL/dl_inference/README.md

+13 -13
@@ -39,22 +39,22 @@ In this simple case, the `predict_batch_fn` will use TensorFlow APIs to load the
 
 #### Notebook List
 
-Below is a full list of the notebooks with links to the examples they are based on. All notebooks have been saved with sample outputs for quick browsing.
+Below is a full list of the notebooks and their links. All notebooks have been saved with sample outputs for quick browsing.
 
 | | Framework | Notebook Name | Description | Link
 | ------------- | ------------- | ------------- | ------------- | -------------
-| 1 | HuggingFace | DeepSeek-R1 | LLM batch inference using the DeepSeek-R1-Distill-Llama reasoning model to solve word problems. | [Link](https://huggingface.co/deepseek-ai/DeepSeek-R1)
-| 2 | HuggingFace | Qwen-2.5-7b | LLM batch inference using the Qwen-2.5-7b model for text summarization. | [Link](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct)
-| 3 | HuggingFace | Gemma-7b | LLM batch inference using the Google Gemma-7b model for code comprehension tasks. | [Link](https://huggingface.co/google/gemma-7b-it)
-| 4 | HuggingFace | Sentence Transformers | Sentence embeddings using SentenceTransformers in Torch. | [Link](https://huggingface.co/sentence-transformers)
-| 5+6 | HuggingFace | Conditional Generation | Sentence translation using the T5 text-to-text transformer (Torch and Tensorflow). | [Link](https://huggingface.co/docs/transformers/model_doc/t5#t5)
-| 7+8 | HuggingFace | Pipelines | Sentiment analysis using Huggingface pipelines (Torch and Tensorflow). | [Link](https://huggingface.co/docs/transformers/quicktour#pipeline-usage)
-| 9 | PyTorch | Image Classification | Training a model to predict clothing categories in FashionMNIST, and deploying with Torch-TensorRT accelerated inference. | [Link](https://pytorch.org/tutorials/beginner/basics/quickstart_tutorial.html)
-| 10 | PyTorch | Housing Regression | Training and deploying a model to predict housing prices in the California Housing Dataset, and deploying with Torch-TensorRT accelerated inference. | [Link](https://github.com/christianversloot/machine-learning-articles/blob/main/how-to-create-a-neural-network-for-regression-with-pytorch.md)
-| 11 | Tensorflow | Image Classification | Training and deploying a model to predict hand-written digits in MNIST. | [Link](https://github.com/tensorflow/docs/blob/master/site/en/tutorials/keras/save_and_load.ipynb)
-| 12 | Tensorflow | Keras Preprocessing | Training and deploying a model with preprocessing layers to predict likelihood of pet adoption in the PetFinder mini dataset. | [Link](https://github.com/tensorflow/docs/blob/master/site/en/tutorials/structured_data/preprocessing_layers.ipynb)
-| 13 | Tensorflow | Keras Resnet50 | Deploying ResNet-50 to perform flower recognition from flower images. | [Link](https://docs.databricks.com/en/_extras/notebooks/source/deep-learning/keras-metadata.html)
-| 14 | Tensorflow | Text Classification | Training and deploying a model to perform sentiment analysis on the IMDB dataset. | [Link](https://github.com/tensorflow/docs/blob/master/site/en/tutorials/keras/text_classification.ipynb)
+| 1 | HuggingFace | DeepSeek-R1 | LLM batch inference using the DeepSeek-R1-Distill-Llama reasoning model to solve word problems. | [Link](huggingface/deepseek-r1_torch.ipynb)
+| 2 | HuggingFace | Qwen-2.5-7b | LLM batch inference using the Qwen-2.5-7b model for text summarization. | [Link](huggingface/qwen-2.5-7b_torch.ipynb)
+| 3 | HuggingFace | Gemma-7b | LLM batch inference using the Google Gemma-7b model for code comprehension tasks. | [Link](huggingface/gemma-7b_torch.ipynb)
+| 4 | HuggingFace | Sentence Transformers | Sentence embeddings using SentenceTransformers in Torch. | [Link](huggingface/sentence_transformers_torch.ipynb)
+| 5+6 | HuggingFace | Conditional Generation | Sentence translation using the T5 text-to-text transformer (Torch and Tensorflow). | [Torch Link](huggingface/conditional_generation_torch.ipynb), [TF Link](huggingface/conditional_generation_tf.ipynb)
+| 7+8 | HuggingFace | Pipelines | Sentiment analysis using Huggingface pipelines (Torch and Tensorflow). | [Torch Link](huggingface/pipelines_torch.ipynb), [TF Link](huggingface/pipelines_tf.ipynb)
+| 9 | PyTorch | Image Classification | Training a model to predict clothing categories in FashionMNIST, and deploying with Torch-TensorRT accelerated inference. | [Link](pytorch/image_classification_torch.ipynb)
+| 10 | PyTorch | Housing Regression | Training and deploying a model to predict housing prices in the California Housing Dataset, and deploying with Torch-TensorRT accelerated inference. | [Link](pytorch/housing_regression_torch.ipynb)
+| 11 | Tensorflow | Image Classification | Training and deploying a model to predict hand-written digits in MNIST. | [Link](tensorflow/image_classification_tf.ipynb)
+| 12 | Tensorflow | Keras Preprocessing | Training and deploying a model with preprocessing layers to predict likelihood of pet adoption in the PetFinder mini dataset. | [Link](tensorflow/keras_preprocessing_tf.ipynb)
+| 13 | Tensorflow | Keras Resnet50 | Deploying ResNet-50 to perform flower recognition from flower images. | [Link](tensorflow/keras_resnet50_tf.ipynb)
+| 14 | Tensorflow | Text Classification | Training and deploying a model to perform sentiment analysis on the IMDB dataset. | [Link](tensorflow/text_classification_tf.ipynb)
 
 
 ## Running Locally

examples/ML+DL-Examples/Spark-DL/dl_inference/huggingface/deepseek-r1_torch.ipynb

+6 -5
@@ -6,11 +6,11 @@
 "source": [
 "<img src=\"http://developer.download.nvidia.com/notebooks/dlsw-notebooks/tensorrt_torchtrt_efficientnet/nvidia_logo.png\" width=\"90px\">\n",
 "\n",
-"# PySpark LLM Inference: DeepSeek-R1\n",
+"# PySpark LLM Inference: DeepSeek-R1 Reasoning Q/A\n",
 "\n",
 "In this notebook, we demonstrate distributed batch inference with [DeepSeek-R1](https://github.com/deepseek-ai/DeepSeek-R1), using open weights on Huggingface.\n",
 "\n",
-"We use [DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) as demonstration. DeepSeek's distilled models are based on open-source LLMs (such as Llama/Qwen), and are fine-tuned using samples generated by DeepSeek-R1 to perform multi-step reasoning tasks.\n",
+"We use [DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) as demonstration. DeepSeek's distilled models are based on open-source LLMs (such as Llama/Qwen), and are fine-tuned using samples generated by DeepSeek-R1. We'll show how to use the model to reason through word problems.\n",
 "\n",
 "**Note:** Running this model on GPU with 16-bit precision requires **~18GB** of GPU RAM. Make sure your instances have sufficient GPU capacity."
 ]
@@ -261,6 +261,7 @@
 "outputs": [],
 "source": [
 "import os\n",
+"import pandas as pd\n",
 "import datasets\n",
 "from datasets import load_dataset\n",
 "datasets.disable_progress_bars()"
@@ -330,7 +331,7 @@
 "source": [
 "#### Load DataFrame\n",
 "\n",
-"Load the Orca Math Word Problems dataset from Huggingface and store in a Spark Dataframe."
+"Load the first 500 samples of the [Orca Math Word Problems dataset](https://huggingface.co/datasets/microsoft/orca-math-word-problems-200k) from Huggingface and store in a Spark Dataframe."
 ]
 },
 {
@@ -339,8 +340,8 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"dataset = load_dataset(\"microsoft/orca-math-word-problems-200k\", split=\"train[:1%]\")\n",
-"dataset = dataset.to_pandas()[\"question\"]"
+"dataset = load_dataset(\"microsoft/orca-math-word-problems-200k\", split=\"train\", streaming=True)\n",
+"dataset = pd.Series([sample[\"question\"] for sample in dataset.take(500)])"
 ]
 },
 {
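Per the updated "Load DataFrame" markdown above, the resulting pandas Series is then stored in a Spark DataFrame for distributed inference. The conversion cell is not part of this diff; below is a minimal sketch of one way it could look, with assumed variable names (`spark`, `questions`, `df`):

```python
import pandas as pd
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Assumed stand-in for the Series built from the streamed dataset above.
questions = pd.Series(
    ["What is 7 times 6?", "A train travels 60 km in 1.5 hours; what is its speed?"],
    name="question",
)

# Spark can build a single-column DataFrame directly from the pandas frame.
df = spark.createDataFrame(questions.to_frame())
df.show(truncate=60)
```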

examples/ML+DL-Examples/Spark-DL/dl_inference/huggingface/gemma-7b_torch.ipynb

+6 -5
@@ -6,11 +6,11 @@
 "source": [
 "<img src=\"http://developer.download.nvidia.com/notebooks/dlsw-notebooks/tensorrt_torchtrt_efficientnet/nvidia_logo.png\" width=\"90px\">\n",
 "\n",
-"# PySpark LLM Inference: Gemma-7b\n",
+"# PySpark LLM Inference: Gemma-7b Code Comprehension\n",
 "\n",
 "In this notebook, we demonstrate distributed inference with the Google [Gemma-7b-instruct](https://huggingface.co/google/gemma-7b-it) LLM, using open-weights on Huggingface.\n",
 "\n",
-"The Gemma-7b-instruct is an instruction-fine-tuned version of the Gemma-7b base model.\n",
+"The Gemma-7b-instruct is an instruction-fine-tuned version of the Gemma-7b base model. We'll show how to use the model to perform code comprehension tasks.\n",
 "\n",
 "**Note:** Running this model on GPU with 16-bit precision requires **~18 GB** of GPU RAM. Make sure your instances have sufficient GPU capacity."
 ]
@@ -200,6 +200,7 @@
 "outputs": [],
 "source": [
 "import os\n",
+"import pandas as pd\n",
 "import datasets\n",
 "from datasets import load_dataset\n",
 "datasets.disable_progress_bars()"
@@ -269,7 +270,7 @@
 "source": [
 "#### Load DataFrame\n",
 "\n",
-"Load the code comprehension dataset from Huggingface and store in a Spark Dataframe."
+"Load the first 500 samples of the [Code Comprehension dataset](https://huggingface.co/datasets/imbue/code-comprehension) from Huggingface and store in a Spark Dataframe."
 ]
 },
 {
@@ -278,8 +279,8 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"dataset = load_dataset(\"imbue/code-comprehension\", split=\"train[:1%]\")\n",
-"dataset = dataset.to_pandas()[\"question\"]"
+"dataset = load_dataset(\"imbue/code-comprehension\", split=\"train\", streaming=True)\n",
+"dataset = pd.Series([sample[\"question\"] for sample in dataset.take(500)])"
 ]
 },
 {
