Initial support for Ollama
arey committed Dec 31, 2024
1 parent b3493b0 commit e63d48c
Showing 5 changed files with 123 additions and 29 deletions.
90 changes: 68 additions & 22 deletions README.md
@@ -5,7 +5,8 @@
## Understanding the Spring Petclinic LangChain4j application

A chatbot using **Generative AI** has been added to the famous Spring Petclinic application.
This version uses the **[LangChain4j project](https://docs.langchain4j.dev/)** and currently supports **OpenAI** or **Azure's OpenAI** as the **LLM provider**. This is a fork from the **[spring-petclinic-ai](https://github.com/spring-petclinic/spring-petclinic-ai)** based on Spring AI.
This version uses the **[LangChain4j project](https://docs.langchain4j.dev/)** and currently supports **OpenAI**, **Azure OpenAI**, or **Ollama** (partial) as the **LLM provider**.
This is a fork from the **[spring-petclinic-ai](https://github.com/spring-petclinic/spring-petclinic-ai)** based on Spring AI.

This sample demonstrates how to **easily integrate AI/LLM capabilities into a Java application using LangChain4j**.
This can be achieved thanks to:
@@ -43,27 +44,72 @@ Here are **some examples** of what you could ask:

![Screenshot of the chat dialog](docs/chat-dialog.png)

Spring Petclinic currently supports **OpenAI** or **Azure's OpenAI** as the LLM provider.
In order to start `spring-petclinic-langchain4j`, perform the following steps:

1. Decide which provider you want to use. By default, the `langchain4j-open-ai-spring-boot-starter` dependency is enabled. You can change it to `langchain4j-azure-open-ai-spring-boot-starter` in either `pom.xml` or `build.gradle`, depending on your build tool of choice.
2. Create an OpenAI API key or an Azure OpenAI resource in the Azure Portal.
Refer to the [OpenAI quickstart](https://platform.openai.com/docs/quickstart) or [Azure's documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/) for further information on how to obtain these.
You only need to configure the provider you're using: either `openai` or `azure-openai`.
If you don't have your own OpenAI API key, don't worry!
You can temporarily use the `demo` key, which OpenAI provides free of charge for demonstration purposes.
This `demo` key has a quota, is limited to the gpt-4o-mini model, and is intended solely for demonstration use.
3. Export your API keys and endpoint as environment variables:
* either OpenAI:
```bash
export OPENAI_API_KEY="your_api_key_here"
```
* or Azure OpenAI:
```bash
export AZURE_OPENAI_ENDPOINT="https://your_resource.openai.azure.com"
export AZURE_OPENAI_KEY="your_api_key_here"
```
4. Follow the next section, [Run Petclinic locally](#run-petclinic-locally).
## Choosing the LLM provider

Spring Petclinic currently supports **OpenAI**, **Azure OpenAI**, or **Ollama** (partial support) as the LLM provider.
**OpenAI** is the **default**.

Please note that Spring Petclinic is not fully functional with the `llama3.1` model.
See issue [#10](https://github.com/spring-petclinic/spring-petclinic-langchain4j/issues/10) for more information.

### 1. Use the selected LangChain4j Spring Boot starter

Spring Petclinic supports both `Maven` and `Gradle` build tools.

#### Maven build

Switching between LLM providers is done using **Maven profiles**. Three Maven profiles are provided:
1. `openai` (default)
2. `azure-openai`
3. `ollama`

By default, thanks to the default `openai` profile, the `langchain4j-open-ai-spring-boot-starter` dependency is enabled.
You can change it to `langchain4j-azure-open-ai-spring-boot-starter` or `langchain4j-ollama-spring-boot-starter` by activating the corresponding profile.
```shell
./mvnw package -P azure-openai
```

#### Gradle build

Gradle users need to comment or uncomment the appropriate `dev.langchain4j:langchain4j-<llm>-spring-boot-starter` dependency
in the `build.gradle` file, depending on the LLM provider they want to use.
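
For example, switching the Gradle build to Ollama would look like the following sketch (the `langchain4jVersion` property follows the existing `build.gradle`; only one provider starter should be active at a time):

```groovy
dependencies {
    // Core LangChain4j starter stays enabled for every provider
    implementation "dev.langchain4j:langchain4j-spring-boot-starter:${langchain4jVersion}"
    // Comment out the providers you are not using...
    // implementation "dev.langchain4j:langchain4j-open-ai-spring-boot-starter:${langchain4jVersion}"
    // implementation "dev.langchain4j:langchain4j-azure-open-ai-spring-boot-starter:${langchain4jVersion}"
    // ...and uncomment the one you want, here Ollama:
    implementation "dev.langchain4j:langchain4j-ollama-spring-boot-starter:${langchain4jVersion}"
}
```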


### 2. Setup your LLM provider

#### OpenAI

Create an OpenAI API key by following the [OpenAI quickstart](https://platform.openai.com/docs/quickstart).
If you don't have your own OpenAI API key, don't worry!
You can temporarily use the `demo` key, which OpenAI provides free of charge for demonstration purposes.
This `demo` key has a quota, is limited to the gpt-4o-mini model, and is intended solely for demonstration use.

Export your OpenAI API key as an environment variable:
```bash
export OPENAI_API_KEY="your_api_key_here"
```
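
A small sanity check before launching can catch a missing key early. This is a sketch for a POSIX shell, using the same placeholder value as above:

```shell
# Placeholder value; substitute your real key (or the `demo` key)
export OPENAI_API_KEY="your_api_key_here"

# Fail fast if the key is missing before starting the application
if [ -z "$OPENAI_API_KEY" ]; then
  echo "OPENAI_API_KEY is not set" >&2
  exit 1
fi
echo "OPENAI_API_KEY is set"
```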

#### Azure OpenAI

Create an Azure OpenAI resource in the Azure Portal.
Refer to [Azure's documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/) for further information on how to obtain the key and endpoint.

Then export your API key and endpoint as environment variables:
```bash
export AZURE_OPENAI_ENDPOINT="https://your_resource.openai.azure.com"
export AZURE_OPENAI_KEY="your_api_key_here"
```
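
As with OpenAI, a quick check that both variables are present avoids a confusing startup failure. This sketch uses the same placeholder values as above:

```shell
# Placeholder values; substitute your resource endpoint and key
export AZURE_OPENAI_ENDPOINT="https://your_resource.openai.azure.com"
export AZURE_OPENAI_KEY="your_api_key_here"

# Verify both variables are set before launching the application
for var in AZURE_OPENAI_ENDPOINT AZURE_OPENAI_KEY; do
  eval "val=\$$var"
  if [ -z "$val" ]; then
    echo "$var is not set" >&2
    exit 1
  fi
done
echo "Azure OpenAI environment is configured"
```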

#### Ollama

Download the Ollama client from the [Ollama website](https://ollama.com/).
Run the `llama3.1` model:
```shell
ollama run llama3.1
```
By default, the Ollama REST API starts on `http://localhost:11434`. This URL is used in the `application.properties` file.
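
Before starting Petclinic, you can verify that the server is actually listening on that URL. This sketch assumes `curl` is installed; `/api/tags`, which lists installed models, is part of the Ollama REST API:

```shell
# Probe the default Ollama REST endpoint
OLLAMA_URL="http://localhost:11434"
if curl -sf --max-time 2 "$OLLAMA_URL/api/tags" > /dev/null 2>&1; then
  status="up"
  echo "Ollama is reachable at $OLLAMA_URL"
else
  status="down"
  echo "Ollama is not reachable at $OLLAMA_URL; start it with 'ollama run llama3.1'"
fi
```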


## Run Petclinic locally

1 change: 1 addition & 0 deletions build.gradle
@@ -38,6 +38,7 @@ dependencies {
implementation "dev.langchain4j:langchain4j-spring-boot-starter:${langchain4jVersion}"
implementation "dev.langchain4j:langchain4j-open-ai-spring-boot-starter:${langchain4jVersion}"
// implementation "dev.langchain4j:langchain4j-azure-open-ai-spring-boot-starter:${langchain4jVersion}"
// implementation "dev.langchain4j:langchain4j-ollama-spring-boot-starter:${langchain4jVersion}"
implementation "dev.langchain4j:langchain4j-embeddings-all-minilm-l6-v2:${langchain4jVersion}"
// Workaround for AOT issue (https://github.com/spring-projects/spring-framework/pull/33949) -->
implementation 'io.projectreactor:reactor-core'
48 changes: 42 additions & 6 deletions pom.xml
@@ -79,12 +79,13 @@
<artifactId>langchain4j-spring-boot-starter</artifactId>
<version>${langchain4j.version}</version>
</dependency>
<dependency>
<groupId>dev.langchain4j</groupId>
<artifactId>langchain4j-open-ai-spring-boot-starter</artifactId>
<!-- <artifactId>langchain4j-azure-open-ai-spring-boot-starter</artifactId>-->
<version>${langchain4j.version}</version>
</dependency>
<!--
<dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j-ollama-spring-boot-starter</artifactId>
    <version>${langchain4j.version}</version>
</dependency>
-->
<dependency>
<groupId>dev.langchain4j</groupId>
<artifactId>langchain4j-embeddings-all-minilm-l6-v2</artifactId>
@@ -481,5 +482,40 @@
</pluginManagement>
</build>
</profile>

<!-- Maven profile for various LangChain4j model integrations -->
<profile>
<id>openai</id>
<activation>
<activeByDefault>true</activeByDefault>
</activation>
<dependencies>
<dependency>
<groupId>dev.langchain4j</groupId>
<artifactId>langchain4j-open-ai-spring-boot-starter</artifactId>
<version>${langchain4j.version}</version>
</dependency>
</dependencies>
</profile>
<profile>
<id>azure-openai</id>
<dependencies>
<dependency>
<groupId>dev.langchain4j</groupId>
<artifactId>langchain4j-azure-open-ai-spring-boot-starter</artifactId>
<version>${langchain4j.version}</version>
</dependency>
</dependencies>
</profile>
<profile>
<id>ollama</id>
<dependencies>
<dependency>
<groupId>dev.langchain4j</groupId>
<artifactId>langchain4j-ollama-spring-boot-starter</artifactId>
<version>${langchain4j.version}</version>
</dependency>
</dependencies>
</profile>
</profiles>
</project>
11 changes: 11 additions & 0 deletions src/main/resources/application.properties
@@ -51,3 +51,14 @@ langchain4j.open-ai.chat-model.model-name=gpt-4o-mini
langchain4j.open-ai.chat-model.log-requests=true
langchain4j.open-ai.chat-model.log-responses=true

# Ollama
# These parameters only apply when using the langchain4j-ollama-spring-boot-starter dependency
langchain4j.ollama.streaming-chat-model.base-url=http://localhost:11434
langchain4j.ollama.streaming-chat-model.model-name=llama3.1

langchain4j.ollama.streaming-chat-model.log-requests=true
langchain4j.ollama.streaming-chat-model.log-responses=true
langchain4j.ollama.chat-model.base-url=http://localhost:11434
langchain4j.ollama.chat-model.model-name=llama3.1
langchain4j.ollama.chat-model.log-requests=true
langchain4j.ollama.chat-model.log-responses=true
2 changes: 1 addition & 1 deletion src/main/resources/prompts/system.st
@@ -3,7 +3,7 @@ Your job is to answer questions about and to perform actions on the user's behalf
veterinarians, owners, owners' pets and owners' visits.
If you need access to pet owners or pet types, list and locate them without asking the user.
You are required to answer in a professional manner. If you don't know the answer, politely inform the user,
and then ask a follow-up question to help clarify what they are asking.
and then ask a follow-up question to help clarify what they are asking. Don't display technical information such as IDs.
If you do know the answer, provide the answer but do not provide any additional followup questions.
When dealing with vets, if the user is unsure about the returned results, explain that there may be additional data that was not returned.
Only if the user is asking about the total number of all vets, answer that there are a lot and ask for some additional criteria.
