diff --git a/cloudbank-v4/common/src/main/resources/common.yaml b/cloudbank-v4/common/src/main/resources/common.yaml
index c5448313f..efef53877 100644
--- a/cloudbank-v4/common/src/main/resources/common.yaml
+++ b/cloudbank-v4/common/src/main/resources/common.yaml
@@ -51,7 +51,7 @@ management:
       enabled: true
   otlp:
     tracing:
-      endpoint: ${otel.exporter.otlp.endpoint}
+      endpoint: http://obaas-signoz-otel-collector.platform.svc.local:4318
   observations:
     key-values:
       app: ${spring.application.name}
diff --git a/docs-source/cloudbank/content/deploy-ide/get-code.md b/docs-source/cloudbank/content/deploy-ide/get-code.md
index 45ef3f9ad..41dabd43f 100644
--- a/docs-source/cloudbank/content/deploy-ide/get-code.md
+++ b/docs-source/cloudbank/content/deploy-ide/get-code.md
@@ -12,12 +12,12 @@ Download a copy of the CloudBank sample application.
 Create a local clone of the CloudBank source repository using this command.
 
    ```shell
-   git https://github.com/oracle/microservices-datadriven.git
+   git clone --depth 1 --branch cbv4-1.3.1 --single-branch https://github.com/oracle/microservices-datadriven.git
    ```
 
   > **Note**: If you do not have **git** installed on your machine, you can download a zip file of the source code from [GitHub](https://github.com/oracle/microservices-datadriven) and unzip it on your machine instead.
 
-  The source code for the CloudBank application will be in the `microservices-datadriven` directory you just created, in the `cloudbank-v32` subdirectory.
+  The source code for the CloudBank application will be in the `microservices-datadriven` directory you just created, in the `cloudbank-v4` subdirectory.
 
    ```shell
    cd microservices-datadriven/cloudbank-v4
diff --git a/docs-source/cloudbank/content/saga/_index.md b/docs-source/cloudbank/content/saga/_index.md
index 88acfbb88..6a9b15000 100644
--- a/docs-source/cloudbank/content/saga/_index.md
+++ b/docs-source/cloudbank/content/saga/_index.md
@@ -4,7 +4,4 @@ title = "Manage Sagas"
 weight = 5
 +++
 
-This module introduces the Saga pattern, a very important pattern that helps us
-manage data consistency across microservices. We will explore the Long Running
-Action specification, one implementation of the Saga pattern, and then build
-a Transfer microservice that will manage funds transfers using a saga.
\ No newline at end of file
+This module introduces the Saga pattern, a very important pattern that helps us manage data consistency across microservices. We will explore the Long Running Action specification, one implementation of the Saga pattern, and then build a Transfer microservice that will manage funds transfers using a saga.
diff --git a/docs-source/cloudbank/content/saga/intro.md b/docs-source/cloudbank/content/saga/intro.md
index ff084523f..e61d54b02 100644
--- a/docs-source/cloudbank/content/saga/intro.md
+++ b/docs-source/cloudbank/content/saga/intro.md
@@ -6,8 +6,6 @@ weight = 1
 
 This module walks you through implementing the [Saga pattern](https://microservices.io/patterns/data/saga.html) using a [Long Running Action](https://download.eclipse.org/microprofile/microprofile-lra-1.0-M1/microprofile-lra-spec.html) to manage transactions across microservices.
 
-Watch this short introduction video to get an idea of what you will be building: [](youtube:gk4BMX-KuaY)
-
 Estimated Time: 30 minutes
 
 Quick walk through on how to manage saga transactions across microservices.
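+
+If you have not seen a Long Running Action before, here is a minimal, hypothetical sketch of a participant using the MicroProfile LRA 1.0 annotations from the specification linked above. The `TransferResource` class, paths, and method bodies are illustrative only, not the CloudBank implementation you will build in this module:
+
+```java
+import java.net.URI;
+import javax.ws.rs.HeaderParam;
+import javax.ws.rs.POST;
+import javax.ws.rs.PUT;
+import javax.ws.rs.Path;
+import javax.ws.rs.core.Response;
+import org.eclipse.microprofile.lra.annotation.Compensate;
+import org.eclipse.microprofile.lra.annotation.ws.rs.LRA;
+import static org.eclipse.microprofile.lra.annotation.ws.rs.LRA.LRA_HTTP_CONTEXT_HEADER;
+
+@Path("/transfer")
+public class TransferResource {
+
+    // Invoking this endpoint starts a new LRA; the coordinator propagates
+    // its id to participants in the Long-Running-Action header.
+    @POST
+    @LRA(value = LRA.Type.REQUIRES_NEW)
+    public Response transfer(@HeaderParam(LRA_HTTP_CONTEXT_HEADER) URI lraId) {
+        // ... withdraw from one account, deposit into the other ...
+        return Response.ok().build();
+    }
+
+    // If the LRA is cancelled, the coordinator calls this endpoint so the
+    // participant can undo the work associated with the given LRA id.
+    @PUT
+    @Path("/compensate")
+    @Compensate
+    public Response compensate(@HeaderParam(LRA_HTTP_CONTEXT_HEADER) URI lraId) {
+        // ... reverse the transfer ...
+        return Response.ok().build();
+    }
+}
+```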
diff --git a/docs-source/cloudbank/content/springai/_index.md b/docs-source/cloudbank/content/springai/_index.md
index a81c5db0b..b40fec378 100644
--- a/docs-source/cloudbank/content/springai/_index.md
+++ b/docs-source/cloudbank/content/springai/_index.md
@@ -4,6 +4,6 @@ title = "CloudBank AI Assistant"
 weight = 6
 +++
 
-This modules introduces [Spring AI](https://github.com/spring-projects/spring-ai) and explores how it can be used to build a CloudBank AI Assistant (chatbot) that will allow users to interact with CloudBank using a chat-based interface.
+This module introduces [Spring AI](https://github.com/spring-projects/spring-ai) and explores how it can be used to build a CloudBank AI Assistant (chatbot) that will allow users to interact with CloudBank using a chat-based interface. **Coming Soon:** We will be updating this module to help you learn about Retrieval Augmented Generation, Vector Databases, and AI Agents.
diff --git a/docs-source/cloudbank/content/springai/simple-chat.md b/docs-source/cloudbank/content/springai/simple-chat.md
index 34d87be81..3228ea18e 100644
--- a/docs-source/cloudbank/content/springai/simple-chat.md
+++ b/docs-source/cloudbank/content/springai/simple-chat.md
@@ -10,7 +10,7 @@ In this module, you will learn how to build a simple chatbot using Spring AI and
 Oracle Backend for Microservices and AI provides an option during installation to provision a set of Kubernetes nodes with NVIDIA A10 GPUs that are suitable for running AI workloads. If you choose that option during installation, you may also specify how many nodes are provisioned. The GPU nodes will be in a separate Node Pool to the normal CPU nodes, which allows you to scale it independently of the CPU nodes. They are also labeled so that you can target appropriate workloads to them using node selectors and/or affinity rules.
 
-To view a list of nodes in your cluster with a GPU, you can use this command: 
+To view a list of nodes in your cluster with a GPU, you can use this command:
 
 ```bash
 $ kubectl get nodes -l 'node.kubernetes.io/instance-type=VM.GPU.A10.1'
@@ -40,34 +40,35 @@ To install Ollama on your GPU nodes, you can use the following commands:
    helm repo update
    ```
 
-1. Create a `ollama-values.yaml` file to configure how Ollama should be installed, including
-   which node(s) to run it on. Here is an example that will run Ollama on a GPU node
-   and will pull the `llama3` model.
+1. Create an `ollama-values.yaml` file to configure how Ollama should be installed, including which node(s) to run it on. Here is an example that will run Ollama on a GPU node and will pull the `llama3` model.
 
    ```yaml
   ollama:
-    gpu: 
+    gpu:
       enabled: true
-      type: 'nvidia'
+      type: nvidia
       number: 1
-    models: 
+    models:
+      pull:
        - llama3
   nodeSelector:
-    node.kubernetes.io/instance-type: VM.GPU.A10.1 
+    node.kubernetes.io/instance-type: VM.GPU.A10.1
   ```
 
-   For more information on how to configure Ollama using the helm chart, refer to
-   [its documentation](https://artifacthub.io/packages/helm/ollama-helm/ollama).
+   For more information on how to configure Ollama using the helm chart, refer to [its documentation](https://artifacthub.io/packages/helm/ollama-helm/ollama).
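+
+   If you would like to review the full set of values you can configure before installing, you can inspect the chart's defaults with a standard Helm command (this is generic Helm usage, not specific to this chart):
+
+   ```bash
+   helm show values ollama-helm/ollama
+   ```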
-   > **Note:** If you are using an environment where no GPU is available, you can run this on a CPU by changing the `values.yaml` file to the following:
+   > **Note:** If you are using an environment where no GPU is available, you can run this on a CPU by changing the `ollama-values.yaml` file to the following:
 
    ```yaml
-   ollama:
-     gpu:
-       enabled: false
-     models:
-       - llama3
-   ```
+   ollama:
+     gpu:
+       enabled: false
+       type: amd
+       number: 1
+     models:
+       pull:
+         - llama3
+   ```
 
 1. Create a namespace to deploy Ollama in:
@@ -80,23 +81,23 @@ To install Ollama on your GPU nodes, you can use the following commands:
    ```bash
    helm install ollama ollama-helm/ollama --namespace ollama --values ollama-values.yaml
    ```
+
 1. You can verify the deployment with the following command:
 
    ```bash
    kubectl get pods -n ollama -w
    ```
-    
+
    When the pod has the status `Running` the deployment is completed.
 
    ```text
    NAME                      READY   STATUS    RESTARTS   AGE
    ollama-659c88c6b8-kmdb9   0/1     Running   0          84s
    ```
-    
+
 ### Test your Ollama deployment
 
-You can interact with Ollama using the provided command line tool, called `ollama`.
-For example, to list the available models, use the `ollama ls` command:
+You can interact with Ollama using the provided command line tool, called `ollama`. For example, to list the available models, use the `ollama ls` command:
 
 ```bash
 kubectl -n ollama exec svc/ollama -- ollama ls
@@ -117,11 +118,9 @@ which provides a comprehensive platform for building enterprise-level applicatio
 
 ### Using LLMs hosted by Ollama in your Spring application
 
-A Kubernetes service named 'ollama' with port 11434 will be created so that your
-applications can talk to models hosted by Ollama.
+A Kubernetes service named 'ollama' with port 11434 will be created so that your applications can talk to models hosted by Ollama.
 
-Now, you will create a simple Spring AI application that uses Llama3 to
-create a simple chatbot.
+Now, you will create a simple Spring AI application that uses Llama3 to create a simple chatbot.
 
 > **Note:** The sample code used in this module is available [here](https://github.com/oracle/microservices-datadriven/tree/main/cloudbank-v4/chatbot).
@@ -204,11 +203,11 @@ create a simple chatbot.
    ```
 
-   Note that this is very similar to the Maven POM files you have created in previous modules. [Spring AI](https://github.com/spring-projects/spring-ai) is currently approaching its 1.0.0 release, so you need to enable access to the milestone and snapshot repositories to use it. You will see the `repositories` section in the POM file above does that. 
+   Note that this is very similar to the Maven POM files you have created in previous modules. [Spring AI](https://github.com/spring-projects/spring-ai) is currently approaching its 1.0.0 release, so you need to enable access to the milestone and snapshot repositories to use it. You will see the `repositories` section in the POM file above does that.
 
    The `spring-ai-bom` was added in the `dependencyManagement` section to make it easy to select the correct versions of various dependencies.
 
-   Finally, a dependency for `spring-ai-ollama-spring-boot-starter` was added. This provides access to the Spring AI Ollama functionality and autoconfiguration. 
+   Finally, a dependency for `spring-ai-ollama-spring-boot-starter` was added. This provides access to the Spring AI Ollama functionality and autoconfiguration.
 
 1. Configure access to your Ollama deployment
@@ -228,9 +227,7 @@ create a simple chatbot.
       model: llama3
    ```
 
-   Note that you are providing the URL to access the Ollama instance that you just
-   deployed in your cluster. You also need to tell Spring AI to enable chat and
-   which model to use.
+   Note that you are providing the URL to access the Ollama instance that you just deployed in your cluster. You also need to tell Spring AI to enable chat and which model to use.
 
 1. Create the main Spring application class
@@ -315,7 +312,7 @@ create a simple chatbot.
 1. Build a JAR file for deployment
 
-   Run the following command to build the JAR file (it will also remove any earlier builds). 
+   Run the following command to build the JAR file (it will also remove any earlier builds).
 
    ```shell
    $ mvn clean package
@@ -394,9 +391,9 @@ create a simple chatbot.
 The simplest way to verify the application is to use a kubectl tunnel to access it.
 
-1. Create a tunnel to access the application
+1. Create a tunnel to access the application:
 
-   Start a tunnel using this command: 
+   Start a tunnel using this command:
 
    ```bash
    kubectl -n application port-forward svc/chatbot 8080 &
@@ -413,4 +410,4 @@ The simplest way to verify the application is to use a kubectl tunnel to access
    Spring Boot is an open-source Java-based framework that provides a simple and efficient way to build web applications, RESTful APIs, and microservices. It's built on top of the Spring Framework, but with a more streamlined and opinionated approach.
    ...
    ...
-   ```
\ No newline at end of file
+   ```
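+
+For reference, here is a minimal, hypothetical sketch of the kind of controller such a chatbot can use. The `ChatController` name and `/chat` endpoint are illustrative and may differ from the actual sample code linked above; the `ChatClient.Builder` is autoconfigured by the Spring AI Ollama starter:
+
+```java
+import org.springframework.ai.chat.client.ChatClient;
+import org.springframework.web.bind.annotation.GetMapping;
+import org.springframework.web.bind.annotation.RequestParam;
+import org.springframework.web.bind.annotation.RestController;
+
+@RestController
+class ChatController {
+
+    private final ChatClient chatClient;
+
+    // The starter autoconfigures a ChatClient.Builder wired to the
+    // Ollama model configured in application.yaml.
+    ChatController(ChatClient.Builder builder) {
+        this.chatClient = builder.build();
+    }
+
+    // Sends the user's question to the model and returns the reply.
+    @GetMapping("/chat")
+    String chat(@RequestParam String question) {
+        return chatClient.prompt()
+                .user(question)
+                .call()
+                .content();
+    }
+}
+```
+
+With the tunnel running, a request like `curl "http://localhost:8080/chat?question=What+is+Spring+Boot"` against this sketch would return an answer similar to the output shown above.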