Chatbotting made beautiful with gmessage - a visual treat for local conversations.
gmessage is an easy and lightweight way to get started with a locally running LLM on your computer.
We are currently in alpha and mainly targeting macOS; the project should also work on Linux and Windows, but those platforms haven't been tested yet.
Contributions are more than welcome! Bugs are expected, so please report them here.
The fastest way is to use the pre-built docker image.
docker run -p 10999:10999 drbh/gmessage:v0.0.0
This method runs the server and lets you interact with it via the web app. It is similar to the cloud deployment, since it skips the desktop app and only runs the server and web app.
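Once the container is up, you can confirm the server is reachable before opening the web app in your browser. A minimal sketch using only the standard library (that the root path answers with HTTP 200 is an assumption):

```python
import urllib.request

def server_up(url="http://localhost:10999/"):
    """Return True if the gmessage server answers on the given URL."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        # Connection refused or timed out: the container isn't ready yet.
        return False

# Example: print("ready" if server_up() else "not reachable yet")
```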
Try out the app without installing anything by visiting gmessage.xyz. NOTE: this demo mindlessly returns a one-liner joke no matter what you say to it, because running the full model on fast servers is expensive. If you want to try the full model locally, follow the instructions below.
- β Easy to use
- β Beautiful UI
- β Easy to install
- β Lots of themes
- β Search chat history
- β Create, view multiple chats
- β Text to speech
- β Export chat to JSON file
- β Menubar, Desktop & Web apps built-in
- β View and manage models
- β Locally running LLM server
- β Dockerized
- β Cloud deployment ready
- β’οΈ This is experimental software and should not be used in production!
- β’οΈ More features to come!
Since gmessage accepts and returns the same JSON format as the OpenAI API, you can use the same Python code to interact with gmessage as you would with the OpenAI API.
import openai

# Point the client at the local gmessage server instead of OpenAI.
openai.api_key = ""  # no key is needed for the local server
openai.api_base = "http://localhost:10999/api"

# Message-based requests go through ChatCompletion rather than Completion,
# since the request body carries a list of chat messages.
response = openai.ChatCompletion.create(
    model="gpt4all-mpt-7b",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello there."},
        {"role": "assistant", "content": "Hi, how can I help you?"},
        {"role": "user", "content": "Reverse a list in Python."},
    ],
)

print(response.choices[0])
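Because the wire format matches the OpenAI API, the same request can also be sketched with only the Python standard library, with no openai dependency. The `/chat/completions` path under the `/api` base is an assumption based on the OpenAI API layout:

```python
import json
import urllib.request

# The same JSON body the openai client would send.
payload = {
    "model": "gpt4all-mpt-7b",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Reverse a list in Python."},
    ],
}

def chat(base="http://localhost:10999/api"):
    # Endpoint path mirrors the OpenAI chat completions route (an assumption).
    req = urllib.request.Request(
        base + "/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Requires a running gmessage server:
# reply = chat()
# print(reply["choices"][0]["message"]["content"])
```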
This method builds the server and the desktop app on your computer, and the result can be bundled into a single executable for native use.
make
bin/gmessage
You can compile your own docker image and run it locally or on any cloud provider that supports docker.
# build it yourself
docker build -t gmessage .
docker run -p 10999:10999 gmessage
Fly.io provides an easy way to deploy containerized apps to the cloud. The steps below result in a running gmessage app in the cloud.
flyctl launch # flyctl deploy (after you've created an app)
fly scale vm shared-cpu-4x
fly scale memory 8192
# at the time of writing;
# the cost of 4 vCPUs and 8GB of RAM is $0.0000165/s ($42.79/mo)
# check out https://fly.io/docs/about/pricing/ for up to date info
fly scale show
# VM Resources for app: gmessage
# Groups
# NAME COUNT KIND CPUS MEMORY REGIONS
# app 1 shared 4 8192 MB bos
# now open the app in your browser
open 'https://gmessage.fly.dev/'
# when you're done you can delete the app
fly destroy gmessage
Open source AI is developing rapidly and improving every day. However, these models are still in their infancy and have a long way to go before they can be used in production: they are often slower and produce less coherent results than their commercial counterparts. Over time gmessage will improve as the underlying models improve, but for now it is best suited to hacking, experimentation, and research.