MultiModal Gpt (early phase) is an advanced multimodal AI that supports text, PDFs, images, web search, and more.
Front-end
- Next.js (TypeScript support)
- Markdown support for blogs
- Zod (see the validation sketch after this list)
- Tailwind CSS
- Shadcn
- Clerk Authentication (Webhook support)
- Framer Motion
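
Zod handles input validation on the frontend. As a minimal sketch (the schema shape and field names here are hypothetical, not the project's actual types), a chat request schema could look like this:

```ts
import { z } from "zod";

// Hypothetical chat request shape; the real fields live in the repo.
const chatRequestSchema = z.object({
  message: z.string().min(1, "Message cannot be empty"),
  // Optional attachment URLs (e.g. PDFs/images uploaded to S3).
  attachments: z.array(z.string().url()).optional(),
});

type ChatRequest = z.infer<typeof chatRequestSchema>;

// safeParse returns a result object instead of throwing on bad input.
const result = chatRequestSchema.safeParse({ message: "Summarize this PDF" });
if (result.success) {
  const payload: ChatRequest = result.data;
  console.log(payload.message);
}
```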
Back-end
- Node.js (Bun support)
- Express
- Jest (Unit testing)
- Docker
- AWS S3
- Langchain (see the retrieval sketch after this list)
- Datastax (Vector store)
- Svix (Webhook support)
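
Langchain and the Datastax vector store power retrieval over uploaded documents. The sketch below shows the general ingest-and-query flow, substituting Langchain's in-memory store for Datastax; the embedding model, chunk sizes, and sample text are assumptions, not the project's settings:

```ts
import { OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

// Split extracted document text into overlapping chunks for embedding.
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,
  chunkOverlap: 200,
});
const docs = await splitter.createDocuments(["text extracted from a PDF"]);

// Embed and index the chunks (the project indexes into Datastax instead).
const store = await MemoryVectorStore.fromDocuments(
  docs,
  new OpenAIEmbeddings() // reads OPENAI_API_KEY from the environment
);

// Pull the chunks most relevant to a user question.
const matches = await store.similaritySearch("What does the PDF conclude?", 4);
console.log(matches.map((d) => d.pageContent));
```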
For deployment
- Vercel for the frontend
- fly.io + Docker for the backend (see the Dockerfile sketch below)
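
The backend image for fly.io could be as small as the sketch below; the base image (`oven/bun`) is real, but the port and file layout are assumptions to adjust against the repo:

```dockerfile
# Minimal sketch of a Bun image for the Express backend.
FROM oven/bun:1

WORKDIR /app

# Install dependencies first so Docker can cache this layer.
COPY package.json bun.lockb ./
RUN bun install --frozen-lockfile

# Copy the source and build.
COPY . .
RUN bun run build

# Port and start command are assumptions; match them to the repo's config.
EXPOSE 3000
CMD ["bun", "start"]
```

fly.io builds and deploys an image like this with `fly launch` and `fly deploy`.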
- Install node modules

```bash
git clone https://github.com/piyushyadav0191/MultiModal-Gpt
cd MultiModal-Gpt/Multi-modal-Gpt && bun i
cd ../MultiModal-GPT-backend && bun i
```
- Environment variables: copy the variable names from `.env.example` into a new `.env` file and fill in your own values.
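
Assuming each app directory ships its own complete `.env.example` (not verified here), you can bootstrap the file in one step:

```bash
# Run in both the frontend and backend directories, then edit in your values.
cp .env.example .env
```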
- Run the application (frontend and backend both)

```bash
bun run dev
```
- Run tests

```bash
bun test
```
- Build for production (frontend and backend both)

```bash
bun run build
```
- Run in production (frontend and backend both)

```bash
bun start
```
Roadmap
- Add tests in Cypress and Vitest (see the example test below)
- Improve prompts
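
As a starting point for the planned Vitest coverage, a unit test can stay this small (the helper under test is a hypothetical stand-in, not code from the repo):

```ts
import { describe, expect, it } from "vitest";

// Hypothetical helper; swap in a real function from the codebase.
function truncatePrompt(prompt: string, maxLength: number): string {
  return prompt.length <= maxLength ? prompt : prompt.slice(0, maxLength);
}

describe("truncatePrompt", () => {
  it("leaves short prompts untouched", () => {
    expect(truncatePrompt("hello", 10)).toBe("hello");
  });

  it("cuts long prompts down to the maximum length", () => {
    expect(truncatePrompt("a".repeat(20), 10)).toHaveLength(10);
  });
});
```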
We welcome contributions! Please follow these steps:
- Fork the repository.
- Create a new branch (`git checkout -b feature-branch`).
- Make your changes.
- Commit your changes (`git commit -m 'Add some feature'`).
- Push to the branch (`git push origin feature-branch`).
- Open a pull request.
This project is licensed under the MIT License - see the LICENSE file for details.
For any questions or feedback, please reach out to us at [email protected].