Hydrosphere + Kubeflow Pipelines

This repository shows how to orchestrate a machine learning workflow with Kubeflow and Hydrosphere Serving.

Prerequisites

Note: by default, all Kubeflow components are installed into the kubeflow namespace.
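Assuming kubectl is configured against your cluster, you can confirm the components are up before running the workflow:

  $ kubectl get pods -n kubeflow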

Run a Workflow

  1. Build and publish Docker images for all workflow stages (01-07)
  2. Adjust pipeline.py to point to your published images
  3. Compile the pipeline:
    $ python pipeline.py pipeline.tar.gz && tar -xvf pipeline.tar.gz

This creates two files: pipeline.tar.gz and pipeline.yaml (extracted from the archive). Either file can be used to start a pipeline execution.
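As a rough illustration of step 2, pipeline.py will contain a pipeline definition whose container images you point at the ones you published. The sketch below is a hypothetical, minimal shape for such a file using the kfp v1 DSL; the registry, image names, and step names are assumptions, not the repository's actual code:

```python
# Hypothetical sketch of a pipeline.py, assuming the kfp SDK (v1-style
# dsl.ContainerOp).  Replace the image references with the stage images
# you built and published in step 1.
STAGE_IMAGES = [
    "registry.example.com/stage-01:latest",   # placeholder registry/names
    "registry.example.com/stage-02:latest",
    # ... images for stages 03-07
]

def make_pipeline():
    """Build a pipeline that runs one container per stage, sequentially.

    Requires `pip install kfp`; the import is local so this module can
    be inspected without the SDK installed.
    """
    import kfp.dsl as dsl

    @dsl.pipeline(name="hydrosphere", description="Workflow stages 01-07")
    def pipeline():
        prev = None
        for i, image in enumerate(STAGE_IMAGES, start=1):
            op = dsl.ContainerOp(name=f"stage-{i:02d}", image=image)
            if prev is not None:
                op.after(prev)   # enforce sequential execution
            prev = op

    return pipeline
```

A definition like this is typically compiled with `kfp.compiler.Compiler().compile(make_pipeline(), "pipeline.tar.gz")`, which is what produces the archive unpacked by the command above.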

  • (Recommended) Kubeflow Pipelines

    • UI
      1. Open the Kubeflow UI and upload pipeline.yaml with the Upload Workflow button
      2. Create an experiment and make a run using this pipeline
    • Shell
      1. Adjust client.py as needed: 1) specify the compiled pipeline definition; 2) specify the ml-pipeline endpoint; 3) specify the experiment name.
      2. Execute python client.py
  • Argo Workflows

    1. Install Argo
    2. Submit a workflow
      $ argo submit pipeline.yaml --watch
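For the shell route, client.py will use the kfp SDK's client to create an experiment and start a run. The sketch below is a hypothetical outline of such a script; the endpoint, experiment name, and run name are placeholders corresponding to the three adjustments listed above:

```python
# Hypothetical sketch of a client.py, assuming the kfp SDK (v1 client
# API).  All three constants are placeholders -- set them as in steps
# 1) - 3) above before running.
PIPELINE_FILE = "pipeline.tar.gz"    # 1) compiled pipeline definition
ENDPOINT = "http://localhost:8888"   # 2) ml-pipeline endpoint
EXPERIMENT = "hydrosphere-demo"      # 3) experiment name

def submit(pipeline_file, endpoint, experiment):
    """Create (or reuse) an experiment and start a run from the file.

    Requires `pip install kfp` and a reachable ml-pipeline endpoint;
    the import is local so the module loads without the SDK installed.
    """
    import kfp
    client = kfp.Client(host=endpoint)
    exp = client.create_experiment(experiment)
    return client.run_pipeline(exp.id, "hydrosphere-run", pipeline_file)
```

With the constants adjusted, `submit(PIPELINE_FILE, ENDPOINT, EXPERIMENT)` is the whole of what `python client.py` needs to do.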