fastapi_ai_assistant

This repository contains a FastAPI application that demonstrates the OpenAI Assistants QuickStart guide, as shown in the official documentation. It implements the step-by-step instructions for creating an Assistant (including Code Interpreter), creating Threads for conversation, adding Messages, and streaming a Run using Server-Sent Events (SSE).

Chat interface screenshot


Overview

  • OpenAI Assistants (Beta)
    This app uses the OpenAI Python Beta SDK to create, configure, and run a GPT-based Assistant.
  • FastAPI
    We wrap the Assistants API logic in a FastAPI server to provide an HTTP interface, including SSE endpoints for real-time token streaming.
  • Streaming Chat UI
    A simple front-end using Alpine.js & TailwindCSS to let users enter a prompt, see the streamed tokens in real time, and stop the generation if desired.

Key Steps from the QuickStart

This application follows the four major steps from OpenAI’s Assistants QuickStart:

  1. Create an Assistant
    Configure a custom name, instructions, and attach any tools (e.g., Code Interpreter) to the GPT model.
  2. Create a Thread
    Start a conversation “session” for the user.
  3. Add Messages
    As the user talks to the Assistant, each message is stored in the Thread.
  4. Run the Assistant
    Generate a response by calling the model (and any tools) on the entire conversation context. In this example, we use streaming to emit tokens to the client in real time.

Features

  1. Real-Time Streaming
    Uses Server-Sent Events (SSE) via sse_starlette to stream tokens as they are generated by the Assistant.
  2. Code Interpreter Tool
    Demonstrates how to enable the built-in code_interpreter tool for advanced math, scripting, and code execution.
  3. FastAPI + Alpine.js
    Lightweight Python backend with a minimal, reactive JavaScript frontend.
  4. Stop Generation
    A Stop button in the UI closes the SSE connection, immediately stopping further token streaming.

Installation

  1. Clone the repository:

    git clone https://github.com/carlosplanchon/fastapi_ai_assistant.git
    cd fastapi_ai_assistant
  2. Install the dependencies from requirements.txt:

    pip install -r requirements.txt
  3. Configure OpenAI (Beta) environment:

    • Set your OPENAI_API_KEY environment variable:
      export OPENAI_API_KEY="sk-..."

Usage

  1. Run the FastAPI application:

    python main.py

    or

    uvicorn main:app --host 0.0.0.0 --port 8000
  2. Navigate to:

    http://localhost:8000/
    
  3. Chat with the AI Assistant:

    • Enter a message and press Send.
    • Watch as the response arrives token-by-token.
    • Click Stop to end the generation at any time.

Code Structure

fastapi_ai_assistant/
├── templates/
│   └── index.html        # Contains the chat UI (HTML, Alpine.js, TailwindCSS)
├── main.py               # FastAPI app with SSE and Assistants logic
├── README.md             # This file
└── requirements.txt      # Python dependencies
  • main.py implements:

    • Assistant Creation: assistant = client.beta.assistants.create(...)
    • Thread Management: thread = client.beta.threads.create()
    • Message Creation: client.beta.threads.messages.create(...)
    • Run Streaming: Uses the runs.stream(...) context manager and a custom SSE event handler.
    • FastAPI Endpoints:
      • GET /: Serves the chat UI.
      • GET /chat-stream?prompt=...: Streams the AI response via SSE.
  • index.html:

    • A Tailwind-styled chat interface with Alpine.js for reactivity.
    • Subscribes to the /chat-stream endpoint using EventSource.
    • Displays real-time tokens as they arrive and offers a Stop button.

Customization

  • Model and Tools:
    Change the model parameter or add more tools in main.py to suit your needs.
    E.g., tools=[{"type": "code_interpreter"}, {"type": "file_search"}].
  • Instructions:
    Customize the Assistant’s global instructions (analogous to a system prompt) to define its personality or domain knowledge.
  • UI/Styling:
    Modify index.html or integrate a different frontend approach.
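Swapping the model, tools, or instructions only touches the assistant-creation call. A hedged sketch, where the name, instructions, and model string are illustrative placeholders:

```python
def make_custom_assistant(client):
    """Create an Assistant with a customized model, tool list, and persona.

    `client` is an OpenAI() instance; every literal below is an example
    value to replace with your own.
    """
    return client.beta.assistants.create(
        name="Data Analyst",
        instructions="You are a meticulous data analyst. Prefer code for calculations.",
        tools=[{"type": "code_interpreter"}, {"type": "file_search"}],
        model="gpt-4o-mini",
    )
```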

Contributing

Feel free to open issues or submit pull requests if you have suggestions or improvements.


License

This project is provided under the MIT License.
See the LICENSE file for more details (if present).


Happy coding! Check out the OpenAI Assistants QuickStart docs for more information. If you have questions or run into any issues, please open an issue.
