
Add documentation on installing custom images #67

Closed
16 changes: 15 additions & 1 deletion Quick-Start.md
@@ -21,7 +21,7 @@ Total:
Ray:
CPU: 100m
Memory: 512Mi
MCAD
MCAD:
cpu: 2000m
memory: 2Gi
InstaScale:
@@ -78,6 +78,20 @@ At this point you should be able to go to your notebook spawner page and select

You can access the spawner page through the Open Data Hub dashboard. The default route should be `https://odh-dashboard-<your ODH namespace>.apps.<your cluster's uri>`. Once you are on your dashboard, you can select "Launch application" on the Jupyter application. This will take you to your notebook spawner page.
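
If the route has been customized, you can also look it up directly from the cluster. The command below is only a sketch; it assumes ODH is installed in the `opendatahub` namespace and that the dashboard route is named `odh-dashboard`:

```
oc get route odh-dashboard -n opendatahub -o jsonpath='{.spec.host}'
```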

### [Optional] - Install custom images of MCAD and/or InstaScale
After adding the MCAD and InstaScale objects, the simplest way to update their versions is to replace the `controllerImage` value in their Custom Resources (CRs) with the path to your custom image. This image can be stored in a registry such as quay.io. To help you build and push your custom image to a registry, both the [MCAD](https://github.com/project-codeflare/multi-cluster-app-dispatcher) and [InstaScale](https://github.com/project-codeflare/instascale) repositories provide `make` commands for this purpose.
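
If you prefer not to use the repositories' `make` targets, a plain container build and push works as well. The sketch below uses placeholder image names and assumes a Dockerfile at the repository root; adjust the registry, organization, and tag to match your setup:

```
# Build the custom InstaScale image from the repository root
podman build -t quay.io/<your-org>/instascale:custom .

# Push it to your registry so the cluster can pull it
podman push quay.io/<your-org>/instascale:custom
```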
Member: These commands would be overwritten by the kfdef as the operator would reconcile the changes.

Contributor Author: I made a mistake there; I meant to say to replace the value in the InstaScale CR as opposed to the CRD.

Wouldn't adding the `controllerImage` field and specifying the value in the InstaScale CR set a new desired state? What do you think?

controllerImage in CRD

Member: So the issue is that the ODH operator is managing the InstaScale CR. Any changes you make to the InstaScale CR will be overwritten by the ODH operator. Can you sync up with Dimitri tomorrow? He should be able to explain in better detail (or I can once I'm on).


The value of `controllerImage` can be replaced manually or by running one of the following commands:

#### OpenShift
```
oc patch instascale instascale -n opendatahub --type='json' -p='[{"op": "replace", "path": "/spec/controllerImage", "value": "<PathToYourCustomImage>"}]'
```

#### Kubernetes
```
kubectl patch instascale instascale -n opendatahub --type='json' -p='[{"op": "replace", "path": "/spec/controllerImage", "value": "<PathToYourCustomImage>"}]'
```
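
For reference, the patch above rewrites the `controllerImage` field in the InstaScale custom resource's `spec`. A minimal sketch of the resulting resource is shown below; the `apiVersion` and the surrounding fields are illustrative and may differ in your installation:

```
apiVersion: codeflare.codeflare.dev/v1alpha1  # group/version may differ in your install
kind: InstaScale
metadata:
  name: instascale
  namespace: opendatahub
spec:
  controllerImage: quay.io/<your-org>/instascale:custom  # field replaced by the patch above
```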

### Using an OpenShift Dedicated or ROSA Cluster
If you are using an OpenShift Dedicated or ROSA cluster, you will need to create a secret in the `opendatahub` namespace containing your OCM token, which you can find [here](https://console.redhat.com/openshift/token). In the OpenShift console, navigate to Workloads -> Secrets, click Create, and choose a key/value secret. Set the secret name to `instascale-ocm-secret`, the key to `token`, and the value to your OCM token, then click Create.
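
If you prefer the CLI to the console, an equivalent secret can be created with `oc`; this sketch assumes you substitute your own OCM token for the placeholder:

```
oc create secret generic instascale-ocm-secret -n opendatahub --from-literal=token=<ocm-token>
```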
2 changes: 1 addition & 1 deletion README.md
@@ -6,7 +6,7 @@ Artifacts for installing the Distributed Workloads stack as part of ODH

Distributed Workloads is a simple, user-friendly abstraction for scaling,
queuing and resource management of distributed AI/ML and Python workloads.
It consists of three components:
It consists of four components:

* [CodeFlare SDK](https://github.com/project-codeflare/codeflare-sdk) to define and control remote distributed compute jobs and infrastructure with any Python based environment
* [Multi-Cluster Application Dispatcher (MCAD)](https://github.com/project-codeflare/multi-cluster-app-dispatcher) for management of batch jobs