diff --git a/README.md b/README.md index 53cdd4a8..757197d7 100644 --- a/README.md +++ b/README.md @@ -6,7 +6,7 @@ An initial introduction to the Iguazio Data Science Platform and the platform tu - [Data Science Workflow](#data-science-workflow) - [The Tutorial Notebooks](#the-tutorial-notebooks) - [Getting-Started Tutorial](#getting-started-tutorial) -- [End-to-End Use-Case Application and How-To Demos](#end-to-end-use-case-applications) +- [End-to-End Use-Case Application and How-To Demos](#demos) - [Installing and Updating the MLRun Python Package](#mlrun-python-pkg-install-n-update) - [Data Ingestion and Preparation](#data-ingestion-and-preparation) - [Additional Platform Resources](#platform-resources) @@ -53,15 +53,16 @@ The home directory of the platform's running-user directory (**/User/<running > - The **welcome.ipynb** notebook and main **README.md** file provide the same introduction in different formats. + ## Getting-Started Tutorial -Start out by running the getting-started tutorial to familiarize yourself with the platform and experience firsthand some of its main capabilities.
-
+Start out by running the getting-started tutorial to familiarize yourself with the platform and experience firsthand some of its main capabilities. + View tutorial -You can also view the tutorial on [**GitHub**](https://github.com/mlrun/demos/blob/release/v0.6.x-latest/getting-started-tutorial/tutorial-1-MLRun-basics.ipynb) +You can also view the tutorial on [GitHub](https://github.com/mlrun/demos/blob/release/v0.6.x-latest/getting-started-tutorial/tutorial-1-MLRun-basics.ipynb). - + ## End-to-End Use-Case Application and How-To Demos @@ -85,6 +86,8 @@ For full usage instructions, run the script with the `-h` or `--help` flag: !/User/update-demos.sh --help ``` + + ### End-to-End Use-Case Application Demos @@ -95,7 +98,7 @@ For full usage instructions, run the script with the `-h` or `--help` flag: - + @@ -107,7 +110,7 @@ For full usage instructions, run the script with the `-h` or `--help` flag: - + @@ -119,7 +122,7 @@ For full usage instructions, run the script with the `-h` or `--help` flag: - + @@ -130,7 +133,7 @@ For full usage instructions, run the script with the `-h` or `--help` flag: - + @@ -153,7 +156,7 @@ For full usage instructions, run the script with the `-h` or `--help` flag: - + @@ -164,9 +167,11 @@ For full usage instructions, run the script with the `-h` or `--help` flag: The demo implements both model training and inference, including model monitoring and concept-drift detection. -
Description
-scikit-learn Demo: Full AutoML Pipeline
+scikit-learn Demo: Full AutoML pipeline
Open locally
-Image Classification with Distributed Training Demo
+Image-Classification Demo: Image classification with distributed training
Open locally
-Faces Demo: Real-Time Image Recognition with Deep Learning
+Faces Demo: Real-time image recognition with deep learning
Open locally
-Churn Demo: Real-Time Customer-Churn Prediction
+Churn Demo: Real-time customer-churn prediction
Open locally
-NetOps Demo: Predictive Network Operations / Telemetry
+NetOps Demo: Predictive network operations / telemetry
Open locally
-### How-to Demos
+### How-To Demos
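The NetOps demo entry above mentions model monitoring and concept-drift detection. As a generic illustration only (the demo itself uses MLRun's model-monitoring tooling; this is not its actual implementation, and all numbers are made up), a minimal mean-shift drift check could look like:

```python
import statistics

def mean_shift_drift(reference, current, threshold=0.5):
    """Flag drift when the shift in means between a reference window and a
    current window exceeds `threshold` standard deviations of the reference."""
    ref_mean = statistics.fmean(reference)
    ref_std = statistics.stdev(reference)
    shift = abs(statistics.fmean(current) - ref_mean)
    return shift > threshold * ref_std

# Hypothetical metric windows: a stable baseline and a drifted window.
baseline = [10.0, 11.0, 9.5, 10.5, 10.0]
drifted = [14.0, 15.0, 13.5, 14.5, 14.0]
print(mean_shift_drift(baseline, baseline))  # False
print(mean_shift_drift(baseline, drifted))   # True
```

Real concept-drift detectors compare full distributions (for example via statistical-distance tests) rather than means alone; this sketch only shows the windowed reference-vs-current pattern.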
Description
-How-To: Converting Existing ML Code to an MLRun Project
+How-To: Converting existing ML code to an MLRun project
Open locally
-How-To: Run a Spark job to read CSV file
+How-To: Running a Spark job for reading a CSV file
Open locally

View on GitHub
-Run a Spark job which reads a csv file and logs the dataset to MLRun database.
+Demonstrates how to run a Spark job that reads a CSV file and logs the data set to an MLRun database.
-How-To: Run a Spark Job to analyze data
+How-To: Running a Spark job for analyzing data
Open locally

View on GitHub
-Create and run a Spark job which generates profile report from an Apache Spark DataFrame (based on pandas_profiling).
+Demonstrates how to create and run a Spark job that generates a profile report from an Apache Spark DataFrame based on pandas profiling.
-How-To: Spark Job with Spark Operator
+How-To: Running a Spark Job with Spark Operator
Open locally

View on GitHub
-Demonstrates how to use spark operator for running a Spark job over Kubernetes.
+Demonstrates how to use Spark Operator to run a Spark job over Kubernetes with MLRun.
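The Spark how-to descriptions above share one flow: read a CSV file into a DataFrame, then analyze or log it. As a rough stand-in for that flow (the demos themselves use Spark, MLRun, and pandas-profiling; the in-memory CSV and column names here are invented for illustration), a pandas sketch might look like:

```python
import io

import pandas as pd

# Hypothetical in-memory CSV standing in for the file the Spark demo reads.
csv_text = "device,latency_ms\nrouter-1,12.5\nrouter-2,48.0\nrouter-3,15.2\n"
df = pd.read_csv(io.StringIO(csv_text))

# The "analyzing data" how-to builds a full pandas-profiling report;
# describe() is a minimal stand-in for that profiling step.
profile = df.describe(include="all")
print(int(profile.loc["count", "latency_ms"]))  # 3 rows profiled
```

In the actual demos the read happens with `spark.read.csv(...)` on a Spark session, the result is logged to the MLRun database, and the profile report is generated by pandas-profiling rather than `describe()`.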
diff --git a/platform-overview.ipynb b/platform-overview.ipynb index e2fbdcfb..c2d70681 100644 --- a/platform-overview.ipynb +++ b/platform-overview.ipynb @@ -99,7 +99,7 @@ "\n", "You can develop and test data science models in the platform's Jupyter Notebook service or in your preferred external editor.\n", "When your model is ready, you can train it in Jupyter Notebook or by using scalable cluster resources such as Nuclio functions, Dask, Spark ML, or Kubernetes jobs.\n", - "You can find model-training examples in the following platform demos; for more information and download instructions, see [**welcome.ipynb**](welcome.ipynb#end-to-end-use-case-applications) (notebook) or [**README.md**](README.md#end-to-end-use-case-applications) (Markdown):\n", + "You can find model-training examples in the following platform demos; for more information and download instructions, see [**welcome.ipynb**](welcome.ipynb#demos) (notebook) or [**README.md**](README.md#demos) (Markdown):\n", "\n", "- The NetOps demo demonstrates predictive infrastructure-monitoring using scikit-learn.\n", "- The image-classification demo demonstrates image recognition using TensorFlow and Horovod with MLRun.\n", @@ -118,7 +118,7 @@ "Data scientists need a simple way to track and view current and historical experiments along with the metadata that is associated with each experiment. This capability is critical for comparing different runs, and eventually helps to determine the best model and configuration for production deployment.\n", "\n", "The platform leverages the open-source [MLRun](https://github.com/mlrun/mlrun) library to help tackle these challenges. You can find examples of using MLRun in the [MLRun demos](https://github.com/mlrun/demos/).\n", - "For information about retrieving and updating local copies of the MLRun demos, see [**welcome.ipynb**](welcome.ipynb#end-to-end-use-case-applications) (notebook) or [**README.md**](README.md#end-to-end-use-case-applications) (Markdown)." 
+ "For information about retrieving and updating local copies of the MLRun demos, see [**welcome.ipynb**](welcome.ipynb#demos) (notebook) or [**README.md**](README.md#demos) (Markdown)." ] }, { @@ -185,4 +185,4 @@ }, "nbformat": 4, "nbformat_minor": 4 -} \ No newline at end of file +} diff --git a/welcome.ipynb b/welcome.ipynb index 2397331c..ad3602b6 100644 --- a/welcome.ipynb +++ b/welcome.ipynb @@ -17,7 +17,7 @@ "- [Data Science Workflow](#data-science-workflow)\n", "- [The Tutorial Notebooks](#the-tutorial-notebooks)\n", "- [Getting-Started Tutorial](#getting-started-tutorial)\n", - "- [End-to-End Use-Case Application and How-To Demos](#end-to-end-use-case-applications)\n", + "- [End-to-End Use-Case Application and How-To Demos](#demos)\n", "- [Installing and Updating the MLRun Python Package](#mlrun-python-pkg-install-n-update)\n", "- [Data Ingestion and Preparation](#data-ingestion-and-preparation)\n", "- [Additional Platform Resources](#platform-resources)\n", @@ -98,21 +98,27 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "\n", + "" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ "## Getting-Started Tutorial\n", "\n", - "Start out by running the getting-started tutorial to familiarize yourself with the platform and experience firsthand some of its main capabilities.
\n", - "
\n", + "Start out by running the getting-started tutorial to familiarize yourself with the platform and experience firsthand some of its main capabilities.\n", + "\n", "\"View\n", "\n", - "You can also view the tutorial on [**GitHub**](https://github.com/mlrun/demos/blob/release/v0.6.x-latest/getting-started-tutorial/tutorial-1-MLRun-basics.ipynb)" + "You can also view the tutorial on [GitHub](https://github.com/mlrun/demos/blob/release/v0.6.x-latest/getting-started-tutorial/tutorial-1-MLRun-basics.ipynb)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "" + "" ] }, { @@ -155,6 +161,13 @@ "!/User/update-demos.sh --help" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "" + ] + }, { "cell_type": "markdown", "metadata": {}, @@ -174,7 +187,7 @@ " Description\n", " \n", " \n", - " scikit-learn Demo: Full AutoML Pipeline\n", + " scikit-learn Demo: Full AutoML pipeline\n", " \n", "
Open locally
\n", " \n", @@ -186,7 +199,7 @@ " \n", " \n", " \n", - " Image Classification with Distributed Training Demo\n", + " Image-Classification Demo: Image classification with distributed training\n", " \n", "
Open locally
\n", " \n", @@ -198,7 +211,7 @@ " \n", " \n", " \n", - " Faces Demo: Real-Time Image Recognition with Deep Learning\n", + " Faces Demo: Real-time image recognition with deep learning\n", " \n", "
Open locally
\n", " \n", @@ -209,7 +222,7 @@ " \n", " \n", " \n", - " Churn Demo: Real-Time Customer-Churn Prediction\n", + " Churn Demo: Real-time customer-churn prediction\n", " \n", "
Open locally
\n", " \n", @@ -232,7 +245,7 @@ " \n", " \n", " \n", - " NetOps Demo: Predictive Network Operations / Telemetry\n", + " NetOps Demo: Predictive network operations / telemetry\n", " \n", "
Open locally
\n", " \n", @@ -243,14 +256,21 @@ " The demo implements both model training and inference, including model monitoring and concept-drift detection.\n", " \n", " \n", - " " + "" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "### How-to Demos" + "### How-To Demos" ] }, { @@ -265,7 +285,7 @@ " Description\n", " \n", " \n", - " How-To: Converting Existing ML Code to an MLRun Project\n", + " How-To: Converting existing ML code to an MLRun project\n", " \n", "
Open locally
\n", " \n", @@ -277,36 +297,36 @@ " \n", " \n", " \n", - " How-To: Run a Spark job to read CSV file\n", + " How-To: Running a Spark job for reading a CSV file\n", " \n", "
Open locally
\n", " \n", " \n", "
View on GitHub
\n", " \n", - " Run a Spark job which reads a csv file and logs the dataset to MLRun database.\n", + " Demonstrates how to run a Spark job that reads a CSV file and logs the data set to an MLRun database.\n", " \n", " \n", " \n", - " How-To: Run a Spark Job to analyze data\n", + " How-To: Running a Spark job for analyzing data\n", " \n", "
Open locally
\n", " \n", " \n", "
View on GitHub
\n", " \n", - " Create and run a Spark job which generates profile report from an Apache Spark DataFrame (based on pandas_profiling).\n", + " Demonstrates how to create and run a Spark job that generates a profile report from an Apache Spark DataFrame based on pandas profiling.\n", " \n", " \n", " \n", - " How-To: Spark Job with Spark Operator\n", + " How-To: Running a Spark Job with Spark Operator\n", " \n", "
Open locally
\n", " \n", " \n", "
View on GitHub
\n", " \n", - " Demonstrates how to use spark operator for running a Spark job over Kubernetes.\n", + " Demonstrates how to use Spark Operator to run a Spark job over Kubernetes with MLRun.\n", " \n", " \n", ""