This project is a proof assistant powered by language models. Users submit proof statements in LaTeX and receive the assistant's responses in the same notation.
To run this project, first install the required dependencies:

```bash
npm install
```
Before running the application, you need to set up an OpenAI API key. You can obtain one from the OpenAI website. Once you have the key, set it as an environment variable named `OPENAI_API_KEY`.
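For example, on macOS or Linux the key can be exported in the shell session that will run the app; the value shown here is only a placeholder:

```bash
# Make the key available to the server process (placeholder value; use your own key)
export OPENAI_API_KEY="sk-..."
```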
To start the application, run:

```bash
npm run dev
```
This will start the FastAPI server and make the API endpoints available.
- `POST /api/start`: Starts a proof. It expects a JSON payload with a `question` field containing the proof statement in LaTeX format and a `messages` field containing a list of previous chat messages. The response contains the assistant's reply (see the example requests after this list).
- `POST /api/next`: Advances to the next step of a proof. It expects a JSON payload with a `question` field containing the current proof statement in LaTeX format and a `messages` field containing a list of previous chat messages. The response contains the assistant's reply for the next step.
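As a rough sketch of how a client might call these endpoints, the `curl` commands below assume the dev server listens on `http://localhost:3000` (a typical `npm run dev` default; adjust to your local port) and that each entry in `messages` is an object with `role` and `content` fields. Both the port and the exact message schema are assumptions, not confirmed by this README.

```bash
# Start a new proof (port and message schema are assumptions)
curl -X POST http://localhost:3000/api/start \
  -H "Content-Type: application/json" \
  -d '{"question": "Prove that $\\sqrt{2}$ is irrational.", "messages": []}'

# Request the next proof step, passing back the prior chat history
curl -X POST http://localhost:3000/api/next \
  -H "Content-Type: application/json" \
  -d '{"question": "Prove that $\\sqrt{2}$ is irrational.", "messages": [{"role": "assistant", "content": "Assume $\\sqrt{2} = p/q$ in lowest terms."}]}'
```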
- LangChain Documentation: Official documentation for the LangChain package.
- OpenAI API Documentation: Documentation for the OpenAI API.