Commit

Fix bug showing app command
Signed-off-by: John Collier <[email protected]>
johnmcollier committed Aug 7, 2024
1 parent b0e4c7a commit 76fa5d5
Showing 20 changed files with 55 additions and 25 deletions.
Binary file added skeleton/techdoc/docs/.assets/open-terminal.png
Binary file added skeleton/techdoc/docs/.assets/workbench-name.png
12 changes: 9 additions & 3 deletions skeleton/techdoc/docs/rhoai.md
@@ -11,19 +11,25 @@ This document will outline how you can build and run your sample application
## Running the Sample

1) Navigate to the OpenShift AI workbench created for your sample application

2) Go to `File->Open` and select `Terminal`
![image](./.assets/open-terminal.png)

3) In the terminal, run `cd ${{ values.name }}` to navigate to your sample app's directory

4) Run `pip install --upgrade -r requirements.txt` to install the dependencies for your application
-5) Run `${{ values.appRunCommand }}` to run the sample
+5) Run `${{ values.appRunCommand }}` to run the sample in the workbench.

## Accessing the Sample

With the sample app now running, the following steps will allow you to access the sample app in your browser:

1) Navigate back to the OpenShift AI dashboard, and find the name of your workbench.
![image](./.assets/workbench-name.png)

2) In a terminal window on your machine, run `oc get pods -l app=<workbench-name>`. This will retrieve the name of the pod where the workbench is running.

-3) Run `oc port-forward
+3) Run `oc port-forward <pod-name> ${{ values.appPort }}` to port forward the sample application's port to your local machine.

-4) Access `http://localhost:${{ values.appPort }}` in your browser to access the application
+4) Finally, visit `http://localhost:${{ values.appPort }}` in your browser to access the application.
4 changes: 2 additions & 2 deletions skeleton/template.yaml
@@ -205,6 +205,7 @@ spec:
srcRepoURL: https://${{ parameters.githubServer if parameters.hostType === 'GitHub' else parameters.gitlabServer }}/${{ parameters.repoOwner }}/${{ parameters.repoName }}
appContainer: ${{ 'quay.io/redhat-ai-dev/ai-template-bootstrap-app:latest' if parameters.hostType === 'GitHub' else '${APP_INTERFACE_CONTAINER}' }}
appPort: ${APP_PORT}
+appRunCommand: "${APP_RUN_COMMAND}"
modelServiceContainer: ${MODEL_SERVICE_CONTAINER}
modelServicePort: ${MODEL_SERVICE_PORT}
# Renders all the template variables into the files and directory names and content, and places the result in the workspace.
@@ -225,7 +226,7 @@ spec:
tags: 'sed.edit.APPTAGS'
owner: ${{ parameters.owner }}
repoSlug: '${{ parameters.imageOrg }}/${{ parameters.imageName }}'
defaultBranch: ${{ parameters.branch }}
- id: fetch-github-action
name: Fetch GitHub Action
action: fetch:plain
@@ -298,7 +299,6 @@ spec:
modelPath: "${MODEL_PATH}"
appContainer: ${{ 'quay.io/redhat-ai-dev/ai-template-bootstrap-app:latest' if parameters.hostType === 'GitHub' else '${APP_INTERFACE_CONTAINER}' }}
appPort: ${APP_PORT}
-appRunCommand: ${APP_RUN_COMMAND}
modelServiceContainer: ${MODEL_SERVICE_CONTAINER}
modelServicePort: ${MODEL_SERVICE_PORT}
# SED_LLM_SERVER_START
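For context, `${APP_PORT}` and `${APP_RUN_COMMAND}` in this skeleton are placeholders that are substituted when a concrete template is generated from it. A rough sed-style illustration follows; the real generation tooling may differ, and the command value here is only an example borrowed from one of the generated templates.

```shell
# Hypothetical illustration of filling the ${APP_RUN_COMMAND} placeholder
# in the skeleton; the actual generation tooling may work differently.
APP_RUN_COMMAND="streamlit run chatbot_ui.py"   # example value only
line='appRunCommand: "${APP_RUN_COMMAND}"'      # quoted, as in the skeleton
printf '%s\n' "$line" | sed "s|\${APP_RUN_COMMAND}|${APP_RUN_COMMAND}|"
```

Keeping the value quoted means the rendered YAML stays a valid string even if the substituted command contains characters YAML would otherwise treat specially.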
12 changes: 9 additions & 3 deletions templates/audio-to-text/content/docs/rhoai.md
@@ -11,19 +11,25 @@ This document will outline how you can build and run your sample application
## Running the Sample

1) Navigate to the OpenShift AI workbench created for your sample application

2) Go to `File->Open` and select `Terminal`
![image](./.assets/open-terminal.png)

3) In the terminal, run `cd ${{ values.name }}` to navigate to your sample app's directory

4) Run `pip install --upgrade -r requirements.txt` to install the dependencies for your application
-5) Run `${{ values.appRunCommand }}` to run the sample
+5) Run `${{ values.appRunCommand }}` to run the sample in the workbench.

## Accessing the Sample

With the sample app now running, the following steps will allow you to access the sample app in your browser:

1) Navigate back to the OpenShift AI dashboard, and find the name of your workbench.
![image](./.assets/workbench-name.png)

2) In a terminal window on your machine, run `oc get pods -l app=<workbench-name>`. This will retrieve the name of the pod where the workbench is running.

-3) Run `oc port-forward
+3) Run `oc port-forward <pod-name> ${{ values.appPort }}` to port forward the sample application's port to your local machine.

-4) Access `http://localhost:${{ values.appPort }}` in your browser to access the application
+4) Finally, visit `http://localhost:${{ values.appPort }}` in your browser to access the application.
4 changes: 2 additions & 2 deletions templates/audio-to-text/template.yaml
@@ -165,6 +165,7 @@ spec:
srcRepoURL: https://${{ parameters.githubServer if parameters.hostType === 'GitHub' else parameters.gitlabServer }}/${{ parameters.repoOwner }}/${{ parameters.repoName }}
appContainer: ${{ 'quay.io/redhat-ai-dev/ai-template-bootstrap-app:latest' if parameters.hostType === 'GitHub' else 'quay.io/redhat-ai-dev/audio-to-text:latest' }}
appPort: 8501
+appRunCommand: "streamlit run whisper_client.py"
modelServiceContainer: quay.io/redhat-ai-dev/whispercpp:latest
modelServicePort: 8001
# Renders all the template variables into the files and directory names and content, and places the result in the workspace.
@@ -185,7 +186,7 @@ spec:
tags: '["ai", "whispercpp", "python", "asr"]'
owner: ${{ parameters.owner }}
repoSlug: '${{ parameters.imageOrg }}/${{ parameters.imageName }}'
defaultBranch: ${{ parameters.branch }}
- id: fetch-github-action
name: Fetch GitHub Action
action: fetch:plain
@@ -258,7 +259,6 @@ spec:
modelPath: "/model/model.file"
appContainer: ${{ 'quay.io/redhat-ai-dev/ai-template-bootstrap-app:latest' if parameters.hostType === 'GitHub' else 'quay.io/redhat-ai-dev/audio-to-text:latest' }}
appPort: 8501
-appRunCommand: streamlit run whisper_client.py
modelServiceContainer: quay.io/redhat-ai-dev/whispercpp:latest
modelServicePort: 8001
existingModelServer: ${{ parameters.modelServer === 'Existing model server' }}
12 changes: 9 additions & 3 deletions templates/chatbot/content/docs/rhoai.md
@@ -11,19 +11,25 @@ This document will outline how you can build and run your sample application
## Running the Sample

1) Navigate to the OpenShift AI workbench created for your sample application

2) Go to `File->Open` and select `Terminal`
![image](./.assets/open-terminal.png)

3) In the terminal, run `cd ${{ values.name }}` to navigate to your sample app's directory

4) Run `pip install --upgrade -r requirements.txt` to install the dependencies for your application
-5) Run `${{ values.appRunCommand }}` to run the sample
+5) Run `${{ values.appRunCommand }}` to run the sample in the workbench.

## Accessing the Sample

With the sample app now running, the following steps will allow you to access the sample app in your browser:

1) Navigate back to the OpenShift AI dashboard, and find the name of your workbench.
![image](./.assets/workbench-name.png)

2) In a terminal window on your machine, run `oc get pods -l app=<workbench-name>`. This will retrieve the name of the pod where the workbench is running.

-3) Run `oc port-forward
+3) Run `oc port-forward <pod-name> ${{ values.appPort }}` to port forward the sample application's port to your local machine.

-4) Access `http://localhost:${{ values.appPort }}` in your browser to access the application
+4) Finally, visit `http://localhost:${{ values.appPort }}` in your browser to access the application.
4 changes: 2 additions & 2 deletions templates/chatbot/template.yaml
@@ -179,6 +179,7 @@ spec:
srcRepoURL: https://${{ parameters.githubServer if parameters.hostType === 'GitHub' else parameters.gitlabServer }}/${{ parameters.repoOwner }}/${{ parameters.repoName }}
appContainer: ${{ 'quay.io/redhat-ai-dev/ai-template-bootstrap-app:latest' if parameters.hostType === 'GitHub' else 'quay.io/redhat-ai-dev/chatbot:latest' }}
appPort: 8501
+appRunCommand: "streamlit run chatbot_ui.py"
modelServiceContainer: quay.io/ai-lab/llamacpp_python:latest
modelServicePort: 8001
# Renders all the template variables into the files and directory names and content, and places the result in the workspace.
@@ -199,7 +200,7 @@ spec:
tags: '["ai", "llamacpp", "vllm", "python"]'
owner: ${{ parameters.owner }}
repoSlug: '${{ parameters.imageOrg }}/${{ parameters.imageName }}'
defaultBranch: ${{ parameters.branch }}
- id: fetch-github-action
name: Fetch GitHub Action
action: fetch:plain
@@ -272,7 +273,6 @@ spec:
modelPath: "/model/model.file"
appContainer: ${{ 'quay.io/redhat-ai-dev/ai-template-bootstrap-app:latest' if parameters.hostType === 'GitHub' else 'quay.io/redhat-ai-dev/chatbot:latest' }}
appPort: 8501
-appRunCommand: streamlit run chatbot_ui.py
modelServiceContainer: quay.io/ai-lab/llamacpp_python:latest
modelServicePort: 8001
# SED_LLM_SERVER_START
12 changes: 9 additions & 3 deletions templates/codegen/content/docs/rhoai.md
@@ -11,19 +11,25 @@ This document will outline how you can build and run your sample application
## Running the Sample

1) Navigate to the OpenShift AI workbench created for your sample application

2) Go to `File->Open` and select `Terminal`
![image](./.assets/open-terminal.png)

3) In the terminal, run `cd ${{ values.name }}` to navigate to your sample app's directory

4) Run `pip install --upgrade -r requirements.txt` to install the dependencies for your application
-5) Run `${{ values.appRunCommand }}` to run the sample
+5) Run `${{ values.appRunCommand }}` to run the sample in the workbench.

## Accessing the Sample

With the sample app now running, the following steps will allow you to access the sample app in your browser:

1) Navigate back to the OpenShift AI dashboard, and find the name of your workbench.
![image](./.assets/workbench-name.png)

2) In a terminal window on your machine, run `oc get pods -l app=<workbench-name>`. This will retrieve the name of the pod where the workbench is running.

-3) Run `oc port-forward
+3) Run `oc port-forward <pod-name> ${{ values.appPort }}` to port forward the sample application's port to your local machine.

-4) Access `http://localhost:${{ values.appPort }}` in your browser to access the application
+4) Finally, visit `http://localhost:${{ values.appPort }}` in your browser to access the application.
4 changes: 2 additions & 2 deletions templates/codegen/template.yaml
@@ -179,6 +179,7 @@ spec:
srcRepoURL: https://${{ parameters.githubServer if parameters.hostType === 'GitHub' else parameters.gitlabServer }}/${{ parameters.repoOwner }}/${{ parameters.repoName }}
appContainer: ${{ 'quay.io/redhat-ai-dev/ai-template-bootstrap-app:latest' if parameters.hostType === 'GitHub' else 'quay.io/redhat-ai-dev/codegen:latest' }}
appPort: 8501
+appRunCommand: "streamlit run codegen-app.py"
modelServiceContainer: quay.io/ai-lab/llamacpp_python:latest
modelServicePort: 8001
# Renders all the template variables into the files and directory names and content, and places the result in the workspace.
@@ -199,7 +200,7 @@ spec:
tags: '["ai", "llamacpp", "vllm", "python"]'
owner: ${{ parameters.owner }}
repoSlug: '${{ parameters.imageOrg }}/${{ parameters.imageName }}'
defaultBranch: ${{ parameters.branch }}
- id: fetch-github-action
name: Fetch GitHub Action
action: fetch:plain
@@ -272,7 +273,6 @@ spec:
modelPath: "/model/model.file"
appContainer: ${{ 'quay.io/redhat-ai-dev/ai-template-bootstrap-app:latest' if parameters.hostType === 'GitHub' else 'quay.io/redhat-ai-dev/codegen:latest' }}
appPort: 8501
-appRunCommand: streamlit run codegen-app.py
modelServiceContainer: quay.io/ai-lab/llamacpp_python:latest
modelServicePort: 8001
# SED_LLM_SERVER_START
12 changes: 9 additions & 3 deletions templates/object-detection/content/docs/rhoai.md
@@ -11,19 +11,25 @@ This document will outline how you can build and run your sample application
## Running the Sample

1) Navigate to the OpenShift AI workbench created for your sample application

2) Go to `File->Open` and select `Terminal`
![image](./.assets/open-terminal.png)

3) In the terminal, run `cd ${{ values.name }}` to navigate to your sample app's directory

4) Run `pip install --upgrade -r requirements.txt` to install the dependencies for your application
-5) Run `${{ values.appRunCommand }}` to run the sample
+5) Run `${{ values.appRunCommand }}` to run the sample in the workbench.

## Accessing the Sample

With the sample app now running, the following steps will allow you to access the sample app in your browser:

1) Navigate back to the OpenShift AI dashboard, and find the name of your workbench.
![image](./.assets/workbench-name.png)

2) In a terminal window on your machine, run `oc get pods -l app=<workbench-name>`. This will retrieve the name of the pod where the workbench is running.

-3) Run `oc port-forward
+3) Run `oc port-forward <pod-name> ${{ values.appPort }}` to port forward the sample application's port to your local machine.

-4) Access `http://localhost:${{ values.appPort }}` in your browser to access the application
+4) Finally, visit `http://localhost:${{ values.appPort }}` in your browser to access the application.
4 changes: 2 additions & 2 deletions templates/object-detection/template.yaml
@@ -165,6 +165,7 @@ spec:
srcRepoURL: https://${{ parameters.githubServer if parameters.hostType === 'GitHub' else parameters.gitlabServer }}/${{ parameters.repoOwner }}/${{ parameters.repoName }}
appContainer: ${{ 'quay.io/redhat-ai-dev/ai-template-bootstrap-app:latest' if parameters.hostType === 'GitHub' else 'quay.io/redhat-ai-dev/object_detection:latest' }}
appPort: 8501
+appRunCommand: "streamlit run object_detection_client.py"
modelServiceContainer: quay.io/redhat-ai-dev/object_detection_python:latest
modelServicePort: 8000
# Renders all the template variables into the files and directory names and content, and places the result in the workspace.
@@ -185,7 +186,7 @@ spec:
tags: '["ai", "detr", "python"]'
owner: ${{ parameters.owner }}
repoSlug: '${{ parameters.imageOrg }}/${{ parameters.imageName }}'
defaultBranch: ${{ parameters.branch }}
- id: fetch-github-action
name: Fetch GitHub Action
action: fetch:plain
@@ -258,7 +259,6 @@ spec:
modelPath: "/model/detr-resnet-101"
appContainer: ${{ 'quay.io/redhat-ai-dev/ai-template-bootstrap-app:latest' if parameters.hostType === 'GitHub' else 'quay.io/redhat-ai-dev/object_detection:latest' }}
appPort: 8501
-appRunCommand: streamlit run object_detection_client.py
modelServiceContainer: quay.io/redhat-ai-dev/object_detection_python:latest
modelServicePort: 8000
existingModelServer: ${{ parameters.modelServer === 'Existing model server' }}
