# Pyris V2

Pyris is an intermediary system that connects the [Artemis](https://github.com/ls1intum/Artemis) platform with various Large Language Models (LLMs). It provides a REST API that allows Artemis to interact with different pipelines based on specific tasks.

Currently, Pyris powers [Iris](https://artemis.cit.tum.de/about-iris), a virtual AI tutor that assists students with their programming exercises on Artemis in a pedagogically meaningful way.

## Table of Contents
- [Features](#features)
- [Setup](#setup)
- [Prerequisites](#prerequisites)
- [Local Environment Setup](#local-environment-setup)
- [Docker Setup](#docker-setup)
- [Development Environment](#development-environment)
- [Production Environment](#production-environment)
- [Customizing Configuration](#customizing-configuration)
- [Troubleshooting](#troubleshooting)
- [Additional Notes](#additional-notes)

## Features
- **Modular Design**: Pyris is built to be modular, allowing for integration of new models and pipelines. This design helps the system adapt to different requirements.
- **RAG Support**: Pyris implements Retrieval-Augmented Generation (RAG) using [Weaviate](https://weaviate.io/), a vector database. This feature enables the generation of responses based on retrieved context, potentially improving the relevance of outputs.
- **Flexible Pipelines**: The system supports various pipelines that can be selected depending on the task at hand, providing versatility in handling different types of requests.

- **Exercise Support**: Empowers Iris to provide feedback on programming exercises, enhancing the learning experience for students. Iris analyzes submitted code, feedback, and build logs generated by Artemis to provide detailed insights.

- **Course Content Support**: Leverages RAG (Retrieval-Augmented Generation) to enable Iris to provide detailed explanations for course content, making it easier for students to understand complex topics based on instructor-provided learning materials.

- **Competency Generation**: Automates the generation of competencies for courses, reducing manual effort in creating Artemis competencies.

## Setup
### Prerequisites

- **Python 3.12**: Ensure that Python 3.12 is installed.
```bash
python --version
```
- **Docker and Docker Compose**: Required for containerized deployment.

### Local Environment Setup

> **Note:** If you need to modify the local Weaviate vector database setup, please refer to the [Weaviate Documentation](https://weaviate.io/developers/weaviate/quickstart).
#### Steps

1. **Install Dependencies**

Install the required Python packages:

```bash
pip install -r requirements.txt
```
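
   If you prefer to keep dependencies isolated, you can install them into a virtual environment instead (optional; not required by this setup, any standard tooling works):

   ```bash
   # Create and activate a virtual environment, then install into it
   python -m venv .venv
   source .venv/bin/activate
   pip install -r requirements.txt
   ```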

2. **Create Configuration Files**

- **`application.local.yml`**

Create an `application.local.yml` file in the root directory. This file includes configurations used by the application.

```yaml
api_keys:
  - token: "your-secret-token"

weaviate:
  host: "localhost"
  port: "8001"
  grpc_port: "50051"

env_vars:
  test: "test-value"
```
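
Once your local Weaviate instance is running, you can verify that it is reachable on the configured ports via Weaviate's standard readiness endpoint (a minimal sanity check; adjust the port if your setup differs from the example above):

```bash
# Returns HTTP 200 once Weaviate is ready to accept requests
curl -i http://localhost:8001/v1/.well-known/ready
```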
- **`llm-config.local.yml`**

Create an `llm-config.local.yml` file in the root directory. This file includes a list of models with their configurations.

```yaml
- id: "<model-id>"
name: "<custom-model-name>"
description: "<model-description>"
type: "<model-type>, e.g. azure-chat, ollama"
endpoint: "<your-endpoint>"
api_version: "<your-api-version>"
azure_deployment: "<your-azure-deployment-name>"
model: "<model>, e.g. gpt-3.5-turbo"
api_key: "<your-api-key>"
tools: []
capabilities:
input_cost: 0.5
output_cost: 1.5
gpt_version_equivalent: 3.5
context_length: 16385
vendor: "<your-vendor>"
privacy_compliance: True
self_hosted: False
image_recognition: False
json_mode: True
- id: "model-1"
name: "Custom Model Name"
description: "Description of your model"
type: "azure-chat" # e.g., azure-chat, ollama
endpoint: "https://your-model-endpoint"
api_version: "v1"
azure_deployment: "your-azure-deployment-name"
model: "gpt-3.5-turbo"
api_key: "your-api-key"
tools: []
capabilities:
input_cost: 0.5
output_cost: 1.5
gpt_version_equivalent: 3.5
context_length: 16385
vendor: "Your Vendor"
privacy_compliance: True
self_hosted: False
image_recognition: False
json_mode: True
```
> **Note:** Each model configuration includes capabilities that help the application select the best model for a specific task.

3. **Run the Server**

Start the Pyris server:

```bash
APPLICATION_YML_PATH=./application.local.yml \
LLM_CONFIG_PATH=./llm-config.local.yml \
uvicorn app.main:app --reload
```

4. **Access API Documentation**

Open your browser and navigate to [http://localhost:8000/docs](http://localhost:8000/docs) to access the interactive API documentation.
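
   As a quick sanity check from the command line, you can also fetch the OpenAPI schema that backs the docs page (assuming the default FastAPI routes):

   ```bash
   # Prints the start of the OpenAPI JSON if the server is up
   curl -s http://localhost:8000/openapi.json | head -c 200
   ```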

### Docker Setup

Deploying Pyris using Docker ensures a consistent environment and simplifies the deployment process.

#### Prerequisites

- **Docker**: Install Docker from the [official website](https://www.docker.com/get-started).
- **Docker Compose**: Bundled with Docker Desktop; on Linux, install it separately.

#### Docker Compose Files

- **Development**: `docker-compose/pyris-dev.yml`
- **Production with Nginx**: `docker-compose/pyris-production.yml`
- **Production without Nginx**: `docker-compose/pyris-production-internal.yml`

#### Running the Containers

##### **Development Environment**

1. **Start the Containers**

```bash
docker-compose -f docker-compose/pyris-dev.yml up --build
```

This command:
- Builds the Pyris application.
- Starts Pyris and Weaviate in development mode.
- Mounts local configuration files for easy modification.

2. **Access the Application**

- Application URL: [http://localhost:8000](http://localhost:8000)
- API Docs: [http://localhost:8000/docs](http://localhost:8000/docs)

##### **Production Environment**

###### **Option 1: With Nginx**

1. **Prepare SSL Certificates**

- Place your SSL certificate (`fullchain.pem`) and private key (`priv_key.pem`) in the specified paths or update the paths in the Docker Compose file.

2. **Start the Containers**

```bash
docker-compose -f docker-compose/pyris-production.yml up -d
```

This command:
- Pulls the latest Pyris image from the GitHub Container Registry.
- Starts Pyris, Weaviate, and Nginx.
- Nginx handles SSL termination and reverse proxying.

3. **Access the Application**

- Application URL: `https://your-domain.com`

###### **Option 2: Without Nginx**

1. **Start the Containers**

```bash
docker-compose -f docker-compose/pyris-production-internal.yml up -d
```

This command:
- Pulls the latest Pyris image.
- Starts Pyris and Weaviate.

2. **Access the Application**

- Application URL: [http://localhost:8000](http://localhost:8000)

#### Managing the Containers

- **Stop the Containers**

```bash
docker-compose -f <compose-file> down
```

Replace `<compose-file>` with the appropriate Docker Compose file.
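
  For example, to stop the production stack started above:

  ```bash
  docker-compose -f docker-compose/pyris-production.yml down
  ```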

- **View Logs**

```bash
docker-compose -f <compose-file> logs -f <service-name>
```

Example:

```bash
docker-compose -f docker-compose/pyris-dev.yml logs -f pyris-app
```

- **Rebuild Containers**

If you've made changes to the code or configurations:

```bash
docker-compose -f <compose-file> up --build
```

## Customizing Configuration

- **Environment Variables**

You can customize settings using environment variables:

- `PYRIS_DOCKER_TAG`: Specifies the Pyris Docker image tag.
- `PYRIS_APPLICATION_YML_FILE`: Path to your `application.yml` file.
- `PYRIS_LLM_CONFIG_YML_FILE`: Path to your `llm-config.yml` file.
- `PYRIS_PORT`: Host port for the Pyris application (default is `8000`).
- `WEAVIATE_PORT`: Host port for Weaviate REST API (default is `8001`).
- `WEAVIATE_GRPC_PORT`: Host port for Weaviate gRPC interface (default is `50051`).
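
  For example, to point the production setup at local configuration files and move Pyris to a different host port (illustrative values; adjust to your environment):

  ```bash
  export PYRIS_APPLICATION_YML_FILE=./application.local.yml
  export PYRIS_LLM_CONFIG_YML_FILE=./llm-config.local.yml
  export PYRIS_PORT=8080
  docker-compose -f docker-compose/pyris-production.yml up -d
  ```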

- **Configuration Files**

Modify configuration files as needed:

- **Pyris Configuration**: Update `application.yml` and `llm-config.yml`.
- **Weaviate Configuration**: Adjust settings in `weaviate.yml`.
- **Nginx Configuration**: Modify Nginx settings in `nginx.yml` and related config files.

## Troubleshooting

- **Port Conflicts**

If you encounter port conflicts, change the host ports using environment variables:

```bash
export PYRIS_PORT=8080
```

- **Permission Issues**

Ensure you have the necessary permissions for files and directories, especially for SSL certificates.

- **Docker Resources**

If services fail to start, ensure Docker has sufficient resources allocated.

## Additional Notes

- **Accessing Services Internally**

- Within Docker, services can communicate using service names (e.g., `pyris-app`, `weaviate`).
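
  For example, when Pyris and Weaviate run in the same Compose network, the Weaviate section of the application configuration would typically reference the service name instead of `localhost` (a sketch; the internal port values depend on your Compose file):

  ```yaml
  weaviate:
    host: "weaviate"   # Docker Compose service name
    port: "8001"       # adjust to the container's internal REST port
    grpc_port: "50051"
  ```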

- **SSL Certificates**

- Ensure SSL certificates are valid and properly secured.
- Update paths in the Docker Compose file if necessary.

- **Scaling**

- For increased load, consider scaling services or using orchestration tools like Kubernetes.
