Update LLM-Connections.md (#1039)
* Minor fixes and updates

* minor fixes across docs

* Updated LLM-Connections.md

---------

Co-authored-by: theCyberTech <[email protected]>
theCyberTech and theCyberTech authored Aug 2, 2024
1 parent 09f9212 commit 4a7ae8d
Showing 1 changed file with 56 additions and 82 deletions: docs/how-to/LLM-Connections.md
description: Comprehensive guide on integrating CrewAI with various Large Language Models (LLMs)
## Connect CrewAI to LLMs

!!! note "Default LLM"
    By default, CrewAI uses OpenAI's GPT-4o model (specifically, the model specified by the `OPENAI_MODEL_NAME` environment variable, defaulting to "gpt-4o") for language processing. You can configure your agents to use a different model or API as described in this guide.


## CrewAI Agent Overview

The `Agent` class is the cornerstone for implementing AI solutions in CrewAI. Here's an overview of its attributes (a short construction sketch follows the list):

- **Attributes**:
  - `role`: Defines the agent's role within the solution.
  - `goal`: Specifies the agent's objective.
  - `backstory`: Provides a background story to the agent.
  - `cache` *Optional*: Determines whether the agent should use a cache for tool usage. Default is `True`.
  - `max_rpm` *Optional*: Maximum number of requests per minute the agent's execution should respect.
  - `verbose` *Optional*: Enables detailed logging of the agent's execution. Default is `False`.
  - `allow_delegation` *Optional*: Allows the agent to delegate tasks to other agents. Default is `True`.
  - `tools` *Optional*: Specifies the tools available to the agent for task execution.
  - `max_iter` *Optional*: Maximum number of iterations for the agent to execute a task. Default is 25.
  - `max_execution_time` *Optional*: Maximum execution time for the agent to execute a task.
  - `step_callback` *Optional*: A callback function to be executed after each step.
  - `llm` *Optional*: The Large Language Model the agent uses. Defaults to the model named in the `OPENAI_MODEL_NAME` environment variable.
  - `function_calling_llm` *Optional*: Turns the ReAct CrewAI agent into a function-calling agent.
  - `callbacks` *Optional*: A list of callback functions from the LangChain library that are triggered during the agent's execution process.
  - `system_template` *Optional*: String defining the system format for the agent.
  - `prompt_template` *Optional*: String defining the prompt format for the agent.
  - `response_template` *Optional*: String defining the response format for the agent.
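
A minimal construction sketch using several of these attributes; the `role`, `goal`, and `backstory` values are illustrative, not prescribed by CrewAI:

```python
from crewai import Agent

researcher = Agent(
    role="Research Analyst",                    # illustrative value
    goal="Summarize recent open-source LLM releases",
    backstory="An analyst who tracks the LLM ecosystem.",
    verbose=True,            # detailed execution logging
    allow_delegation=False,  # keep tasks with this agent
    max_iter=25,             # default iteration cap
    cache=True               # reuse tool results
)
```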

CrewAI provides extensive versatility in integrating with various Language Models (LLMs), from local options served through Ollama, such as Llama and Mixtral, to cloud-based solutions like Azure. Its compatibility extends to all [LangChain LLM components](https://python.langchain.com/v0.2/docs/integrations/llms/), offering a wide range of integration possibilities for customized AI applications.

The platform supports connections to an array of Generative AI models, including:

- OpenAI's suite of advanced language models
- Anthropic's cutting-edge AI offerings
- Ollama's diverse range of locally hosted generative models & embeddings
- LM Studio's diverse range of locally hosted generative models & embeddings
- Groq's super-fast LLM offerings
- Azure's generative AI offerings
- HuggingFace's generative AI offerings

This broad spectrum of LLM options enables users to select the most suitable model for their specific needs, whether prioritizing local deployment, specialized capabilities, or cloud-based scalability.

## Changing the default LLM
The default LLM is provided through the `langchain-openai` package, which is installed by default alongside CrewAI. You can change this default to a different model or API by setting the `OPENAI_MODEL_NAME` environment variable, letting you switch between OpenAI models without changing your agent code.
```python
import os
from crewai import Agent

# Required
os.environ["OPENAI_MODEL_NAME"] = "gpt-4-0125-preview"

# The agent picks up the model from the env var; field values are illustrative
example_agent = Agent(
    role="Local Expert",
    goal="Provide insights about the city",
    backstory="A knowledgeable local guide.",
    verbose=True
)
```
## Ollama Local Integration
Ollama is preferred for local LLM integration, offering customization and privacy benefits. To integrate Ollama with CrewAI, you will need the `langchain-ollama` package. You can then set the following environment variables to connect to your Ollama instance running locally on port 11434.

### Setting Up Ollama
- **Environment Variables Configuration**: To integrate Ollama, set the following environment variables:
```python
import os

os.environ["OPENAI_API_BASE"] = "http://localhost:11434"
os.environ["OPENAI_MODEL_NAME"] = "llama2"  # Adjust based on available model
os.environ["OPENAI_API_KEY"] = ""           # No API key required for Ollama
```
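
With these variables set, agents created without an explicit `llm` will route their requests to the local Ollama server. A minimal sketch (the field values are illustrative):

```python
from crewai import Agent

local_agent = Agent(
    role="Local Assistant",                       # illustrative value
    goal="Answer questions using a local model",
    backstory="Runs entirely against a local Ollama server."
)
```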

## Ollama Integration Step by Step (ex. for using Llama 3.1 8B locally)
1. [Download and install Ollama](https://ollama.com/download).
2. After setting up Ollama, pull the Llama 3.1 8B model by typing the following command into your terminal: ```ollama run llama3.1```.
3. Llama 3.1 should now be served locally on `http://localhost:11434`.
```python
from crewai import Agent, Task, Crew
from langchain_ollama import ChatOllama
import os

os.environ["OPENAI_API_KEY"] = "NA"

llm = ChatOllama(
    model="llama3.1",
    base_url="http://localhost:11434"
)

# Field values below are illustrative
general_agent = Agent(
    role="Math Professor",
    goal="Provide solutions to students asking mathematical questions.",
    backstory="An excellent math professor who explains solutions step by step.",
    allow_delegation=False,
    verbose=True,
    llm=llm
)
```

## HuggingFace Integration
There are a couple of different ways you can use HuggingFace to host your LLM.

### Your own HuggingFace endpoint
```python
from langchain_huggingface import HuggingFaceEndpoint

llm = HuggingFaceEndpoint(
    repo_id="microsoft/Phi-3-mini-4k-instruct",
    task="text-generation",
    max_new_tokens=512,
    do_sample=False,
    repetition_penalty=1.03,
)

# Field values below are illustrative
agent = Agent(
    role="HuggingFace Agent",
    goal="Generate text with a hosted model",
    backstory="An agent backed by a HuggingFace inference endpoint.",
    llm=llm
)
```

### From HuggingFaceHub endpoint
```python
from langchain_community.llms import HuggingFaceHub

llm = HuggingFaceHub(
repo_id="HuggingFaceH4/zephyr-7b-beta",
huggingfacehub_api_token="<HF_TOKEN_HERE>",
task="text-generation",
)
```
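
As with the endpoint example above, the Hub-backed `llm` can be passed straight to an agent. A short sketch with illustrative field values:

```python
hub_agent = Agent(
    role="Summarizer",                        # illustrative value
    goal="Summarize articles concisely",
    backstory="A concise technical summarizer.",
    llm=llm
)
```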

## OpenAI Compatible API Endpoints
Switch between APIs and models seamlessly using environment variables, supporting platforms like FastChat, LM Studio, Groq, and Mistral AI.
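
These platforms share one pattern: point the OpenAI-compatible client at the alternative server before creating any agents. A generic sketch, where the base URL, model name, and key are placeholders:

```python
import os
from crewai import Agent

os.environ["OPENAI_API_BASE"] = "http://localhost:8001/v1"  # placeholder endpoint
os.environ["OPENAI_MODEL_NAME"] = "my-model-name"           # placeholder model
os.environ["OPENAI_API_KEY"] = "NA"                         # many local servers ignore the key

agent = Agent(
    role="Assistant",                # illustrative value
    goal="Answer questions",
    backstory="A helpful assistant."
)
```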

### Configuration Examples
#### FastChat
```python
os.environ["OPENAI_API_BASE"] = "http://localhost:8001/v1"
os.environ["OPENAI_MODEL_NAME"] = "oh-2.5m7b-q51"
os.environ["OPENAI_API_KEY"] = "NA"
```

#### LM Studio
Launch [LM Studio](https://lmstudio.ai) and go to the Server tab. Then select a model from the dropdown menu and wait for it to load. Once it's loaded, click the green Start Server button and use the URL, port, and API key that's shown (you can modify them). Below is an example of the default settings as of LM Studio 0.2.19:
```python
os.environ["OPENAI_API_BASE"] = "http://localhost:1234/v1"
os.environ["OPENAI_API_KEY"] = "lm-studio"
```

#### Groq API
```python
os.environ["OPENAI_API_KEY"] = "your-groq-api-key"
os.environ["OPENAI_MODEL_NAME"] = "llama3-8b-8192"
os.environ["OPENAI_API_BASE"] = "https://api.groq.com/openai/v1"
```

#### Mistral API
```python
os.environ["OPENAI_API_KEY"] = "your-mistral-api-key"
os.environ["OPENAI_API_BASE"] = "https://api.mistral.ai/v1"
os.environ["OPENAI_MODEL_NAME"] = "mistral-small"
```

### Solar
```python
import os
from langchain_community.chat_models.solar import SolarChat

os.environ["SOLAR_API_BASE"] = "https://api.upstage.ai/v1/solar"
os.environ["SOLAR_API_KEY"] = "your-solar-api-key"

# Initialize language model
llm = SolarChat(max_tokens=1024)

# Free developer API key available here: https://console.upstage.ai/services/solar
# LangChain example: https://github.com/langchain-ai/langchain/pull/18556
```
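
The resulting Solar `llm` plugs into an agent like any other LangChain chat model. A short sketch with illustrative field values:

```python
solar_agent = Agent(
    role="Assistant",                              # illustrative value
    goal="Answer questions with Solar",
    backstory="A helpful assistant backed by Upstage Solar.",
    llm=llm
)
```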

### text-gen-web-ui
```python
os.environ["OPENAI_API_BASE"] = "http://localhost:5000/v1"
os.environ["OPENAI_MODEL_NAME"] = "NA"
os.environ["OPENAI_API_KEY"] = "NA"
```

### Cohere
```python
from langchain_cohere import ChatCohere

# Initialize language model
os.environ["COHERE_API_KEY"] = "your-cohere-api-key"
llm = ChatCohere()
```
### Azure Open AI Configuration
For Azure OpenAI API integration, set the following environment variables:
```python
import os

os.environ["AZURE_OPENAI_DEPLOYMENT"] = "Your deployment name"
os.environ["OPENAI_API_VERSION"] = "2023-12-01-preview"
os.environ["AZURE_OPENAI_ENDPOINT"] = "Your endpoint"
os.environ["AZURE_OPENAI_API_KEY"] = "<Your API key>"
```

### Example Agent with Azure LLM
```python
from langchain_openai import AzureChatOpenAI

azure_llm = AzureChatOpenAI(
    azure_endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT"),
    api_key=os.environ.get("AZURE_OPENAI_API_KEY")
)

# Field values below are illustrative
azure_agent = Agent(
    role="Example Agent",
    goal="Demonstrate custom LLM configuration",
    backstory="A diligent explorer of GitHub docs.",
    llm=azure_llm
)
```

## Conclusion
Integrating CrewAI with different LLMs expands the framework's versatility, allowing for customized, efficient AI solutions across various domains and platforms.
