
[WIP] End-to-End Kubeflow tutorial using a Sequence-to-Sequence model

This example demonstrates how to use Kubeflow end-to-end to train and serve a Sequence-to-Sequence model on an existing Kubernetes cluster. The tutorial is based on @hamelsmu's article "How To Create Data Products That Are Magical Using Sequence-to-Sequence Models".

Goals

There are two primary goals for this tutorial:

  • End-to-end Kubeflow example
  • End-to-end Sequence-to-Sequence model

By the end of this tutorial, you should know how to:

  • Set up a Kubeflow cluster on an existing Kubernetes deployment
  • Spawn a Jupyter Notebook on the cluster
  • Provision shared persistent storage across the cluster to hold large datasets
  • Train a Sequence-to-Sequence model with TensorFlow on the cluster using GPUs
  • Serve the model using Seldon Core
  • Query the model from a simple front-end application
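To give a taste of the sequence-to-sequence workflow covered above, here is a minimal, pure-Python sketch of the text preprocessing that typically precedes training an encoder-decoder model: building a vocabulary, mapping tokens to integer ids, and padding every sequence to a fixed length. The helper names and the reserved `PAD`/`UNK` ids are illustrative conventions, not code from this tutorial.

```python
PAD, UNK = 0, 1  # reserved ids for padding and out-of-vocabulary tokens


def build_vocab(texts):
    """Map each distinct whitespace token to an integer id (2 and up)."""
    vocab = {}
    for text in texts:
        for tok in text.lower().split():
            if tok not in vocab:
                vocab[tok] = len(vocab) + 2  # ids 0 and 1 are reserved
    return vocab


def encode(text, vocab, max_len):
    """Convert text to a fixed-length list of token ids, padded with PAD."""
    ids = [vocab.get(tok, UNK) for tok in text.lower().split()][:max_len]
    return ids + [PAD] * (max_len - len(ids))


# Example: encode a tiny batch of GitHub issue titles.
issues = ["fix crash on startup", "docs update"]
vocab = build_vocab(issues)
batch = [encode(t, vocab, max_len=5) for t in issues]
```

Fixed-length integer batches like `batch` are what an encoder-decoder network consumes; the real tutorial performs this step with TensorFlow/Keras utilities rather than hand-rolled helpers.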

Steps:

  1. Set up a Kubeflow cluster
  2. Train the model, using either a Jupyter Notebook or TFJob:
    1. Train the model in a Jupyter Notebook
    2. Train the model with TFJob
    3. Run distributed training with tensor2tensor and TFJob
  3. Serve the model
  4. Query the model
  5. Tear down the deployment
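For the serving and querying steps, a front-end typically wraps the raw issue text in Seldon Core's JSON request format and POSTs it to the prediction endpoint. The sketch below builds such a payload; the host and deployment name in the commented request are hypothetical placeholders, not values defined by this tutorial.

```python
import json


def build_payload(issue_body):
    """Wrap raw issue text in Seldon Core's "ndarray" JSON request format."""
    return json.dumps({"data": {"ndarray": [[issue_body]]}})


# Sending the request (not executed here) might look like:
#   import urllib.request
#   req = urllib.request.Request(
#       "http://<ambassador-host>/seldon/<deployment-name>/api/v0.1/predictions",
#       data=build_payload("app crashes when ...").encode(),
#       headers={"Content-Type": "application/json"},
#   )
#   summary = json.load(urllib.request.urlopen(req))

payload = build_payload("app crashes on startup when config is missing")
```

The response carries the generated issue summary in the same `data` envelope, which the front-end application then displays.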