
Commit

Merge pull request #51 from Deeptechia/main
Geppetto v0.2.0
kelyacf authored Apr 16, 2024
2 parents 6dd7073 + 5122f61 commit 5a99adc
Showing 22 changed files with 867 additions and 138 deletions.
11 changes: 11 additions & 0 deletions .github/dependabot.yml
@@ -0,0 +1,11 @@
# To get started with Dependabot version updates, you'll need to specify which
# package ecosystems to update and where the package manifests are located.
# Please see the documentation for all configuration options:
# https://docs.github.com/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file

version: 2
updates:
- package-ecosystem: "cargo" # See documentation for possible values
directory: "/" # Location of package manifests
schedule:
interval: "weekly"
7 changes: 4 additions & 3 deletions .github/workflows/Sync-Github.yml
@@ -1,15 +1,16 @@
name: GitlabSync

on:
- push
- delete
push:
branches:
- main

jobs:
sync:
runs-on: ubuntu-latest
name: Git Repo Sync
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v4
with:
fetch-depth: 0
- uses: wangchucheng/[email protected]
2 changes: 1 addition & 1 deletion .github/workflows/deploy.yml
@@ -14,7 +14,7 @@ jobs:

steps:
- name: Checkout Repository
uses: actions/checkout@v2
uses: actions/checkout@v4

- name: Install SSH Client
run: sudo apt-get update && sudo apt-get install -y openssh-client
10 changes: 7 additions & 3 deletions .github/workflows/tests-python.yml
@@ -1,15 +1,19 @@

name: Test Geppetto

on: [ push, pull_request]
on:
push:
branches:
- main
- develop

jobs:
Test-Python3:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4

- uses: actions/setup-python@v4
- uses: actions/setup-python@v5
with:
python-version: '3.x'

29 changes: 17 additions & 12 deletions README.md
@@ -7,16 +7,19 @@
<img src="./assets/GeppettoMini.png" alt="Geppetto Logo"/>
</p>

Geppetto is a Slack bot for teams to easily interact with ChatGPT. It integrates with OpenAI's ChatGPT-4 and DALL-E-3 models. This project is brought to you by [DeepTechia](https://deeptechia.io/), where the future of technology meets today’s business needs.
Geppetto is a sophisticated Slack bot that facilitates seamless interaction with multiple AI models, including OpenAI's ChatGPT-4, DALL-E-3, and Google's Gemini model. This versatility allows for a variety of AI-driven interactions tailored to team requirements. This project is brought to you by [DeepTechia](https://deeptechia.io/), where the future of technology meets today’s business needs.

## Features

1. **Interaction with ChatGPT-4:**
- You can send direct messages to the application and receive responses from ChatGPT-4.
- Each message generates a conversation thread, and the application uses the message history to formulate coherent responses.
1. **Flexible AI Model Integration and System Management:**
- Users can seamlessly switch between ChatGPT-4-turbo and Gemini to suit their specific interaction needs. ChatGPT-4-turbo is set as the default model.
- You can send direct messages to the application and receive responses from Geppetto. Each message generates a conversation thread, and the application uses the message history to formulate coherent responses.
- The newly introduced LLM controller component allows the user to manage multiple AI models.
- Simplified installation and management process, facilitated by Docker deployment.

2. **Advanced Image Generation with DALL-E-3:**
- Leverage DALL-E-3 to generate creative and contextually relevant images directly within Slack conversations.

2. **Image Generation with DALL-E-3:**
- The application uses DALL-E-3 to generate an image based on the message.

![Geppetto](/assets/Geppetto.gif)

@@ -72,6 +75,8 @@ Before running the application, copy the `.configuration/.env.example` file into
- `SIGNING_SECRET`: Your Signing secret to verify Slack requests (from your Slack App Credentials).
- `DALLE_MODEL`: The OpenAI DALL-E-3 model.
- `CHATGPT_MODEL`: The OpenAI ChatGPT-4 model.
- `GEMINI_MODEL`: The Gemini model.
- `GOOGLE_API_KEY`: The Google Gemini API key.
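For illustration, a stdlib-only sketch of a startup check over these variables (the `REQUIRED_VARS` list and `missing_vars` helper are hypothetical, not part of this commit; the app itself loads `config/.env` via python-dotenv):

```python
import os

# Names taken from the variable list above and config/.env.example.
REQUIRED_VARS = [
    "SLACK_BOT_TOKEN", "SLACK_APP_TOKEN", "OPENAI_API_KEY",
    "SIGNING_SECRET", "DALLE_MODEL", "CHATGPT_MODEL",
    "GEMINI_MODEL", "GOOGLE_API_KEY",
]

def missing_vars(env):
    """Return the required variable names absent (or empty) in a mapping."""
    return [name for name in REQUIRED_VARS if not env.get(name)]

# Example: check the process environment before starting the bot.
unset = missing_vars(os.environ)
```

A check like this would fail fast with a clear message instead of a late `None`-token error inside a handler.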

## Deployment

@@ -89,11 +94,11 @@ Follow these steps to deploy Geppetto:
Enjoy interacting with ChatGPT-4 and DALL-E-3 on Slack!

## Docker
To run geppetto in a docker container, when you have Docker & Docker compose installed:
1. Move docker-compose.example.yml to docker-compose.yml with customizing where your config folder resides
2. Change the config values in config/.env
3. Run docker compose build
4. Run docker compose up -d
To run geppetto in a docker container, when you have Docker and Docker compose installed:
1. Move `docker-compose.example.yml` to `docker-compose.yml`, specifying where your config folder resides.
2. Change the config values in `config/.env`.
3. Run `docker compose build`.
4. Run `docker compose up -d`.

## Tests

@@ -104,7 +109,7 @@ or `python -m unittest -v` for a verbose more specific output

## About DeepTechia

We are DeepTechia, where the future of technology meets today’s business needs. As pioneers in the digital realm, we’ve made it our mission to bridge the gap between innovation and practicality, ensuring that businesses not only survive but thrive in an ever-evolving technological landscape.
We are [DeepTechia](https://deeptechia.io/), where the future of technology meets today’s business needs. As pioneers in the digital realm, we’ve made it our mission to bridge the gap between innovation and practicality, ensuring that businesses not only survive but thrive in an ever-evolving technological landscape.

Born from a passion for cutting-edge technology and a vision for a digitally integrated future, DeepTechia was established to be more than just a tech consultancy. We are visionaries, strategists, and implementers, dedicated to pushing the boundaries of what’s possible while ensuring real-world applicability.

4 changes: 3 additions & 1 deletion config/.env.example
@@ -1,6 +1,8 @@
SLACK_BOT_TOKEN = "YOUR_TOKEN"
SLACK_APP_TOKEN = "YOUR_TOKEN"
OPENAI_API_KEY = "YOUR_TOKEN"
CHATGPT_MODEL = "gpt-4"
CHATGPT_MODEL = "gpt-4-turbo"
DALLE_MODEL = "dall-e-3"
SIGNING_SECRET = "YOUR_SECRET"
GOOGLE_API_KEY = "YOUR_TOKEN"
GEMINI_MODEL = "gemini-pro"
1 change: 1 addition & 0 deletions config/allowed-slack-ids.json
@@ -1,4 +1,5 @@
{
"*":"*",
"User A": "#MemberIDUserA",
"User B": "#MemberIDUserB"
}
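The new `"*":"*"` entry reads like a wildcard that opens the bot to all users. A minimal sketch of how such an allow-list check might work (the `is_allowed` helper and the IDs are hypothetical, not the repo's actual logic):

```python
def is_allowed(user_id, allowed_ids):
    """Allow a user if the allow-list maps any name to the wildcard "*",
    or if the user's Slack member ID appears among the values.

    `allowed_ids` mirrors config/allowed-slack-ids.json: display names
    mapped to member IDs.
    """
    values = allowed_ids.values()
    return "*" in values or user_id in values

# Examples (IDs are placeholders):
everyone = {"*": "*"}
restricted = {"User A": "U123ABC"}
```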
3 changes: 1 addition & 2 deletions config/default_responses.json
@@ -2,8 +2,7 @@
"features": {
"personality": "You are Geppetto, a general intelligence bot created by DeepTechia."
},
"dalle": { "preparing_image": "Preparing image.." },
"user": {
"permission_denied": "The requesting user does not belong to the list of allowed users. Request permission to use the app"
}
}
}
10 changes: 10 additions & 0 deletions geppetto/exceptions.py
@@ -0,0 +1,10 @@
# Geppetto Exceptions


class InvalidThreadFormatError(KeyError):
"""Invalid thread format.
Raise if the submitted thread format doesn't have the expected layout.
Since the UIs and the underlying LLM engines must meet an interface,
some validations have to be undertaken to assure key fields.
"""
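Because `InvalidThreadFormatError` subclasses `KeyError`, callers can catch either type. A small usage sketch (the `require_field` helper is hypothetical, shown only to illustrate the exception):

```python
class InvalidThreadFormatError(KeyError):
    """Raised when a submitted thread is missing an expected key field."""

def require_field(msg, field):
    """Hypothetical validator for one message dict of a thread."""
    if field not in msg:
        raise InvalidThreadFormatError(
            "The input thread doesn't have the field %s" % field
        )
    return msg[field]
```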
61 changes: 61 additions & 0 deletions geppetto/gemini_handler.py
@@ -0,0 +1,61 @@
from urllib.request import urlopen
import logging

from .exceptions import InvalidThreadFormatError
from .llm_api_handler import LLMHandler
from dotenv import load_dotenv
from typing import List, Dict
import os
import textwrap
import google.generativeai as genai
from IPython.display import display
from IPython.display import Markdown

load_dotenv(os.path.join("config", ".env"))

GOOGLE_API_KEY = os.getenv("GOOGLE_API_KEY")
GEMINI_MODEL = os.getenv("GEMINI_MODEL", "gemini-pro")
MSG_FIELD = "parts"
MSG_INPUT_FIELD = "content"

def to_markdown(text):
text = text.replace('•', ' *')
return Markdown(textwrap.indent(text, '> ', predicate=lambda _: True))

class GeminiHandler(LLMHandler):

def __init__(
self,
personality,
):
super().__init__(
'Gemini',
GEMINI_MODEL,
genai.GenerativeModel(GEMINI_MODEL),
)
self.personality = personality
self.system_role = "system"
self.assistant_role = "model"
self.user_role = "user"
genai.configure(api_key=GOOGLE_API_KEY)

def llm_generate_content(self, user_prompt, status_callback=None, *status_callback_args):
logging.info("Sending msg to gemini: %s" % user_prompt)
if len(user_prompt) >= 2 and user_prompt[0].get('role') == 'user' and user_prompt[1].get('role') == 'user':
merged_prompt = {
'role': 'user',
'parts': [msg['parts'][0] for msg in user_prompt[:2]]
}
user_prompt = [merged_prompt] + user_prompt[2:]
response = self.client.generate_content(user_prompt)
markdown_response = to_markdown(response.text)
return str(markdown_response.data)

def get_prompt_from_thread(self, thread: List[Dict], assistant_tag: str, user_tag: str):
prompt = super().get_prompt_from_thread(thread, assistant_tag, user_tag)
for msg in prompt:
if MSG_INPUT_FIELD in msg:
msg[MSG_FIELD] = [msg.pop(MSG_INPUT_FIELD)]
else:
raise InvalidThreadFormatError("The input thread doesn't have the field %s" % MSG_INPUT_FIELD)
return prompt
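The merge step in `llm_generate_content` collapses two consecutive leading `user` messages into one, presumably to satisfy Gemini's alternating-role expectation. A standalone sketch of that step (function name is mine, not the repo's):

```python
def merge_leading_user_messages(prompt):
    """Collapse two consecutive leading 'user' messages into one, keeping
    the first 'parts' entry of each, as GeminiHandler does before calling
    generate_content."""
    if (len(prompt) >= 2
            and prompt[0].get("role") == "user"
            and prompt[1].get("role") == "user"):
        merged = {
            "role": "user",
            "parts": [msg["parts"][0] for msg in prompt[:2]],
        }
        return [merged] + prompt[2:]
    return prompt
```

Note the handler only merges the first pair; a thread with further consecutive same-role messages would pass through unchanged.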
30 changes: 30 additions & 0 deletions geppetto/llm_api_handler.py
@@ -0,0 +1,30 @@
from abc import ABC, abstractmethod
from typing import List, Dict, Callable
from .exceptions import InvalidThreadFormatError

ROLE_FIELD = "role"

class LLMHandler(ABC):
def __init__(self, name, model, client):
self.name = name
self.model = model
self.client = client

def get_info(self):
return f"Name: {self.name} - Model: {self.model}"

@abstractmethod
def llm_generate_content(self, prompt: str, callback: Callable, *callback_args):
pass

def get_prompt_from_thread(self, thread: List[Dict], assistant_tag: str, user_tag: str):
prompt = []
for msg in thread:
formatted_msg = dict(msg)
if ROLE_FIELD in formatted_msg:
formatted_msg[ROLE_FIELD] = formatted_msg[ROLE_FIELD].replace(assistant_tag, self.assistant_role)
formatted_msg[ROLE_FIELD] = formatted_msg[ROLE_FIELD].replace(user_tag, self.user_role)
prompt.append(formatted_msg)
else:
raise InvalidThreadFormatError("The input thread doesn't have the field %s" % ROLE_FIELD)
return prompt
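`get_prompt_from_thread` rewrites the UI-side role tags into whatever roles the target LLM expects (e.g. `model` for Gemini, `assistant` for OpenAI). A self-contained sketch of that mapping (function name and tags are illustrative; the real method raises `InvalidThreadFormatError`, a `KeyError` subclass):

```python
ROLE_FIELD = "role"

def map_thread_roles(thread, assistant_tag, user_tag,
                     assistant_role, user_role):
    """Copy each message and rewrite its role tag to the target LLM's
    role name, raising if a message has no role field."""
    prompt = []
    for msg in thread:
        formatted = dict(msg)  # shallow copy; input thread stays untouched
        if ROLE_FIELD not in formatted:
            raise KeyError(
                "The input thread doesn't have the field %s" % ROLE_FIELD
            )
        formatted[ROLE_FIELD] = (formatted[ROLE_FIELD]
                                 .replace(assistant_tag, assistant_role)
                                 .replace(user_tag, user_role))
        prompt.append(formatted)
    return prompt
```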
36 changes: 36 additions & 0 deletions geppetto/llm_controller.py
@@ -0,0 +1,36 @@
from typing import List, Type, TypedDict, Dict
from .llm_api_handler import LLMHandler


class LLMCfgRec(TypedDict):
name: str
handler: Type[LLMHandler]
handler_args: Dict


LLMCfgs = List[LLMCfgRec]


class LLMController:

def __init__(self, llm_cfgs: LLMCfgs):
self.llm_cfgs = llm_cfgs
self.handlers = {}

def init_controller(self):
for llm in self.llm_cfgs:
name = llm['name']
self.handlers[name] = self.get_handler(name)

def list_llms(self):
return [x['name'] for x in self.llm_cfgs]

def get_llm_cfg(self, name):
for llm in self.llm_cfgs:
if llm['name'] == name:
return llm
raise ValueError("LLM configuration not found for name: %s" % name)

def get_handler(self, name):
llm_cfg = self.get_llm_cfg(name)
return llm_cfg['handler'](**llm_cfg['handler_args'])
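The controller is a small registry: each entry names a handler class plus its constructor kwargs, and `init_controller` instantiates them all up front. A runnable sketch of the same pattern with a stand-in handler (`MiniController` and `DummyHandler` are illustrative names, not the repo's):

```python
class DummyHandler:
    """Stand-in for an LLMHandler subclass such as OpenAIHandler."""
    def __init__(self, personality):
        self.personality = personality

class MiniController:
    def __init__(self, llm_cfgs):
        self.llm_cfgs = llm_cfgs
        self.handlers = {}

    def init_controller(self):
        # Instantiate every configured handler eagerly, keyed by name.
        for cfg in self.llm_cfgs:
            self.handlers[cfg["name"]] = self.get_handler(cfg["name"])

    def list_llms(self):
        return [cfg["name"] for cfg in self.llm_cfgs]

    def get_handler(self, name):
        for cfg in self.llm_cfgs:
            if cfg["name"] == name:
                return cfg["handler"](**cfg["handler_args"])
        raise ValueError("LLM configuration not found for name: %s" % name)

controller = MiniController([
    {"name": "Dummy",
     "handler": DummyHandler,
     "handler_args": {"personality": "You are Geppetto."}},
])
controller.init_controller()
```

Keeping constructor kwargs in the config record is what lets `main.py` register OpenAI and Gemini with the same personality string without special-casing either handler.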
40 changes: 33 additions & 7 deletions geppetto/main.py
@@ -1,33 +1,59 @@
import os
import logging
from dotenv import load_dotenv

from .llm_controller import LLMController
from .slack_handler import SlackHandler
from .openai_handler import OpenAIHandler
from .gemini_handler import GeminiHandler
from slack_bolt.adapter.socket_mode import SocketModeHandler
from .utils import load_json

load_dotenv(os.path.join("config", ".env"))

OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
DALLE_MODEL = os.getenv("DALLE_MODEL")
CHATGPT_MODEL = os.getenv("CHATGPT_MODEL")

SLACK_BOT_TOKEN = os.getenv("SLACK_BOT_TOKEN_TEST")
SLACK_APP_TOKEN = os.getenv("SLACK_APP_TOKEN_TEST")
SIGNING_SECRET = os.getenv("SIGNING_SECRET_TEST")

DEFAULT_RESPONSES = load_json("default_responses.json")

# Initialize logging
# TODO: log to a file
logging.basicConfig(level=logging.INFO)


def initialized_llm_controller():
controller = LLMController(
[
{
"name": "OpenAI",
"handler": OpenAIHandler,
"handler_args": {
"personality": DEFAULT_RESPONSES["features"]["personality"]
}
},
{
"name": "Gemini",
"handler": GeminiHandler,
"handler_args": {
"personality": DEFAULT_RESPONSES["features"]["personality"]
}
}
]
)
controller.init_controller()
return controller



def main():
Slack_Handler = SlackHandler(
load_json("allowed-slack-ids.json"),
load_json("default_responses.json"),
DEFAULT_RESPONSES,
SLACK_BOT_TOKEN,
SIGNING_SECRET,
OPENAI_API_KEY,
DALLE_MODEL,
CHATGPT_MODEL,
initialized_llm_controller()
)
SocketModeHandler(Slack_Handler.app, SLACK_APP_TOKEN).start()

