
Commit

feat: ob-tuner updated to use in memory ORM, controllable by .env file.
svange committed Oct 4, 2023
1 parent ddddf5a commit 47ba771
Showing 2 changed files with 25 additions and 21 deletions.
6 changes: 4 additions & 2 deletions README.md
@@ -19,7 +19,7 @@

🚧 **Under active development. Not ready for use.** 🚧

OpenBrain is a chat platform backed by Large Language Model (LLM) agents. It provides APIs and tools to configure, store, and retrieve chat agents, making your chat sessions more versatile and context-aware.
OpenBrain is a tool-wielding, cloud-native LLM agent platform. It provides APIs and tools to configure, store, and retrieve LangChain agents, making your chat sessions and workflows stateful and persistent. The ORM ships with in-memory and DynamoDB mixins, and a SAM template deploys the resources that stateless agents need into your AWS account. Which mixin is used is controlled by environment variables.

OpenBrain agents are stateful by nature, so they can remember things about you and your conversation. They can also use tools, so the same agent can both chat and perform actions. This project provides a mechanism to integrate with an API that stores the state of the agent as a session, so the agent can be used asynchronously from any source in a serverless environment.
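The mixin-selection idea the description refers to can be illustrated with a small, self-contained sketch. Everything below is hypothetical (the names `InMemoryMixin`, `DynamoMixin`, and `AgentConfigSketch` are not taken from the codebase); the real wiring lives in the `openbrain.orm` package.

```python
import os


class InMemoryMixin:
    """Persist records to a process-local dict (hypothetical stand-in)."""

    _store: dict = {}

    def save(self):
        self._store[self.key] = self.__dict__.copy()


class DynamoMixin:
    """Persist records to DynamoDB (hypothetical stand-in for the real mixin)."""

    def save(self):
        raise NotImplementedError("the real mixin would call boto3 here")


# Choose the persistence backend from the environment, as the README describes.
ORM_LOCAL = os.environ.get("ORM_LOCAL", "False").lower() == "true"
PersistenceMixin = InMemoryMixin if ORM_LOCAL else DynamoMixin


class AgentConfigSketch(PersistenceMixin):
    """Illustrative config record, not the real openbrain.orm AgentConfig."""

    def __init__(self, client_id: str, profile_name: str):
        self.client_id = client_id
        self.profile_name = profile_name
        self.key = f"{client_id}/{profile_name}"
```

With `ORM_LOCAL=True` in the environment, `AgentConfigSketch("public", "default").save()` writes to the in-process dict; otherwise the DynamoDB-style mixin takes over.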

@@ -45,14 +45,16 @@ pip install openbrain
cp .env.example .env # Edit this file with your own values
```
### Deploy Supporting Infrastructure
:warning: **This will deploy resources to your AWS account.** :warning: You will be charged for these resources. See [AWS Pricing](https://aws.amazon.com/pricing/) for more information.

```bash
python ci_cd.py -I
```
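Because this step creates billable resources, it can help to confirm which AWS account your current credentials resolve to before deploying. A minimal sketch using boto3 (this snippet is not part of the repository):

```python
import boto3

# Print the AWS account and principal the SAM deploy would run against.
sts = boto3.client("sts")
identity = sts.get_caller_identity()
print(f"Account: {identity['Account']}")
print(f"Caller:  {identity['Arn']}")
```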

## Using OpenBrain

### OpenBrain Gradio Tuner
To facilitate tuning agent parameters and experimenting with prompts, OpenBrain provides a GUI interface using Gradio. To launch the GUI.
To facilitate tuning agent parameters and experimenting with prompts, OpenBrain provides a GUI built with Gradio. You can use the in-memory ORM mixin to store your agent configurations locally; control this by setting `GRADIO_LOCAL=True`, `ORM_LOCAL=True`, and `UTIL_LOCAL=True` in your `.env` file.
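Environment variables arrive as strings, so normalizing them into booleans avoids the pitfall where the string `"False"` is still truthy. A minimal sketch under that assumption (the `env_flag` helper is illustrative and not part of OpenBrain); `app.py` calls `load_dotenv()`, which is what copies `.env` values into the process environment:

```python
import os

from dotenv import load_dotenv

load_dotenv()  # copy values from .env into os.environ


def env_flag(name: str, default: str = "False") -> bool:
    # Treat "true", "1", and "yes" (any case) as enabled; anything else as disabled.
    return os.environ.get(name, default).strip().lower() in {"true", "1", "yes"}


GRADIO_LOCAL = env_flag("GRADIO_LOCAL")
ORM_LOCAL = env_flag("ORM_LOCAL")
UTIL_LOCAL = env_flag("UTIL_LOCAL")
```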

![img.png](img.png)

40 changes: 21 additions & 19 deletions openbrain/app.py
@@ -11,6 +11,7 @@
from openbrain.orm.model_chat_message import ChatMessage
from openbrain.util import Util


load_dotenv()

GRADIO_LOCAL = os.environ.get("GRADIO_LOCAL", False)
@@ -19,19 +19,10 @@
DEFAULT_CLIENT_ID = "public"
DEFAULT_PROFILE_NAME = "default"

logging.basicConfig(filename="app.log", encoding="utf-8", level=logging.DEBUG)


# CLIENT_ID = "gradio-tuner"
if GRADIO_LOCAL:
    from openbrain.orm.model_common_base import InMemoryDb


# def store_key(profile_name):
# # Retrieve the user preferences from the DynamoDB database
# agent_config = AgentConfig.get(profile_name=profile_name, client_id=CLIENT_ID)
#
# # Store the personalization key and user preferences for the session
# # For now, we'll just return the user preferences
# return agent_config
logging.basicConfig(filename="app.log", encoding="utf-8", level=logging.DEBUG)


def chat(message, chat_history, _profile_name, session_state, _client_id):
@@ -63,7 +55,7 @@ def chat(message, chat_history, _profile_name, session_state, _client_id):
session_state["last_response"] = response_message
session_state["session"] = session

chat_history.append(message, response_message)
chat_history.append([message, response_message])

# Return the response from the API
return ["", chat_history, session_state]
@@ -112,7 +104,7 @@ def reset(
session_state["last_response"] = response
response_message = response.json()["message"]
message = f"Please wait, fetching new agent...\n\n{response_message}"
chat_history.append(message, response.json()["message"])
chat_history.append([message, response_message])

# Return the response from the API
return ["", chat_history, session_state]
@@ -205,11 +197,21 @@ def auth(username, password):
def get_available_profile_names() -> list:
    # logger.warning("get_available_profile_names() is not implemented")
    # Get AgentConfig table
    # (lines removed by this commit: DynamoDB-only lookup)
    table = boto3.resource("dynamodb").Table(Util.AGENT_CONFIG_TABLE_NAME)
    # get all items in the table
    response = table.scan()
    # return the profile names with client_id == 'public'
    return [item["profile_name"] for item in response["Items"] if item["client_id"] == DEFAULT_CLIENT_ID]
    # (lines added by this commit: use the in-memory ORM when GRADIO_LOCAL is set, otherwise fall back to DynamoDB)
    if GRADIO_LOCAL:
        try:
            # List the profiles already stored in the in-memory table.
            lst = list(InMemoryDb.instance[Util.AGENT_CONFIG_TABLE_NAME][DEFAULT_CLIENT_ID].keys())
            return lst
        except Exception:
            # Nothing stored yet: create and persist a default config, then list again.
            default_config = AgentConfig(client_id=DEFAULT_CLIENT_ID, profile_name=DEFAULT_PROFILE_NAME)
            default_config.save()
            lst = list(InMemoryDb.instance[Util.AGENT_CONFIG_TABLE_NAME][DEFAULT_CLIENT_ID].keys())
            return lst
    table = boto3.resource("dynamodb").Table(Util.AGENT_CONFIG_TABLE_NAME)
    # get all items in the table
    response = table.scan()
    # return the profile names with client_id == 'public'
    return [item["profile_name"] for item in response["Items"] if item["client_id"] == DEFAULT_CLIENT_ID]



with gr.Blocks(theme="JohnSmith9982/small_and_pretty") as main_block:
