AI-powered search and chat for my memory palace.
All code & data used is 100% open-source.
The dataset comes from a Supabase database where I store embeddings of the articles, videos, and PDFs I consume. Here is the repo for the LLM QA Bot that generates them: https://github.com/dino1729/LLM_QA_Bot
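Each stored row looks roughly like the sketch below; the actual table and column names in LLM_QA_Bot may differ.

```ts
// Illustrative shape of one stored row; the real column names may differ.
type MemoryChunk = {
  id: number;
  source: string;      // URL of the article/video/PDF the chunk came from
  content: string;     // the text of the chunk
  embedding: number[]; // 1536-dimensional text-embedding-ada-002 vector
};
```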
My Memory Palace provides 2 things:
- A search interface.
- A chat interface.
Search was created with OpenAI Embeddings (text-embedding-ada-002).
In the app we take the user's search query, generate an embedding, and use the result to find the most similar chunks from my memory palace.
The comparison is done using cosine similarity across the database of vectors.
The vectors live in a Postgres database with the pgvector extension, hosted on Supabase.
Results are ranked by similarity score and returned to the user.
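Under the hood, the flow looks roughly like the sketch below. It assumes the openai and @supabase/supabase-js Node clients, and a Postgres function exposed as a Supabase RPC; the RPC name match_memory_chunks and its arguments are illustrative, not necessarily what this fork uses.

```ts
// Minimal sketch of the search flow: embed the query, then let pgvector
// rank stored chunks by cosine similarity. The RPC name and arguments
// ("match_memory_chunks", query_embedding, match_count) are illustrative.
import OpenAI from "openai";
import { createClient } from "@supabase/supabase-js";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

export async function searchMemoryPalace(query: string, matchCount = 5) {
  // Embed the query with the same model used when the chunks were stored.
  const embeddingRes = await openai.embeddings.create({
    model: "text-embedding-ada-002",
    input: query,
  });
  const queryEmbedding = embeddingRes.data[0].embedding;

  // Ask Postgres (via pgvector) for the most similar chunks.
  const { data: chunks, error } = await supabase.rpc("match_memory_chunks", {
    query_embedding: queryEmbedding,
    match_count: matchCount,
  });
  if (error) throw error;

  // Rows come back ordered by similarity score, best match first.
  return chunks;
}
```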
Chat builds on top of search. It uses the top search results to construct a prompt that is fed into gpt-3.5-turbo-16k.
This allows for a chat-like experience where the user can ask questions about my memory palace and get answers.
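A minimal sketch of that chat step, reusing the searchMemoryPalace helper from the search sketch above (the import path and the prompt wording are illustrative):

```ts
// Minimal sketch of the chat flow: retrieve the top chunks, stuff them
// into a prompt, and send it to gpt-3.5-turbo-16k.
import OpenAI from "openai";
import { searchMemoryPalace } from "./search"; // the sketch above; path illustrative

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export async function answerQuestion(question: string) {
  const chunks = await searchMemoryPalace(question);
  const context = chunks
    .map((chunk: { content: string }) => chunk.content)
    .join("\n\n");

  const completion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo-16k",
    messages: [
      {
        role: "system",
        content:
          "Answer the user's question using only the provided memory palace passages.",
      },
      {
        role: "user",
        content: `Passages:\n${context}\n\nQuestion: ${question}`,
      },
    ],
  });

  return completion.choices[0].message.content;
}
```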
Here's a quick overview of how to run it locally.
- Set up OpenAI
You'll need an OpenAI API key to generate embeddings.
- Set up Supabase
Create a database, then check the memorize box in the LLM QA Bot app to upload your embeddings to the memory palace database.
- Clone repo
```bash
git clone https://github.com/mckaywrigley/paul-graham-gpt.git
```
- Install dependencies
```bash
npm i
```
- Set up environment variables
Create a .env.local file in the root of the repo with the following variables:
```bash
OPENAI_API_KEY=
NEXT_PUBLIC_SUPABASE_URL=
SUPABASE_SERVICE_ROLE_KEY=
```
- Run app
```bash
npm run dev
```
Alternatively, build and run it as a Docker container:
```bash
docker build -t mymemorypalace .
docker run -p 3001:3001 --env-file .env.local --restart always --name mymemorypalace-container mymemorypalace
```
This project is a fork of https://github.com/mckaywrigley/paul-graham-gpt