# Celebro

Celebro is an AI-powered search and query application that delivers intelligent, accurate responses by combining web search, embedding models, and advanced text generation. It is built with modern frameworks and technologies for scalability, flexibility, and efficiency.
## Features

### User Authentication
- Signup with unique username and email checks.
- Login with JWT-based authentication.
### Intelligent Search
- Performs web searches using Serper API.
- Generates AI-enhanced responses using Llama models hosted on Groq.
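A minimal sketch of the search step, assuming Serper's usual JSON-over-POST interface (endpoint, header name, and response shape are assumptions taken from Serper's public API, not from Celebro's `search_service.py`):

```python
import json
import urllib.request

# Typical Serper endpoint; Celebro reads SERPER_SEARCH_URL from its .env instead.
SERPER_SEARCH_URL = "https://google.serper.dev/search"


def build_search_request(query: str, api_key: str, num_results: int = 5) -> urllib.request.Request:
    """Prepare a Serper web-search request; the caller decides when to send it."""
    payload = json.dumps({"q": query, "num": num_results}).encode()
    return urllib.request.Request(
        SERPER_SEARCH_URL,
        data=payload,
        headers={"X-API-KEY": api_key, "Content-Type": "application/json"},
        method="POST",
    )


def snippets_from_response(body: dict) -> list[str]:
    """Pull the organic-result snippets that would be fed to the Llama model as context."""
    return [item.get("snippet", "") for item in body.get("organic", [])]
```

The extracted snippets are then passed, together with the user's question, to the Groq-hosted Llama model to generate the final answer.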
### Query History
- Stores user queries and responses for future reference.
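A history record might look like the following (the field names are illustrative, not the actual MongoDB schema):

```python
from datetime import datetime, timezone


def make_history_entry(username: str, query: str, response: str) -> dict:
    """Shape of one query-history document as it might be inserted into MongoDB."""
    return {
        "username": username,
        "query": query,
        "response": response,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
```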
### Embeddings for Advanced Search
- Uses HuggingFace embeddings via Langchain for semantic similarity.
- Stores embeddings in ChromaDB for fast vector-based retrieval.
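The retrieval idea behind this is cosine similarity over embedding vectors. In the app this is delegated to HuggingFace embeddings and ChromaDB via Langchain; the toy sketch below only illustrates the ranking step with plain Python:

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def top_k(query_vec: list[float], store: dict[str, list[float]], k: int = 2) -> list[str]:
    """Return the ids of the k stored vectors most similar to the query.

    `store` maps document id -> embedding vector, standing in for ChromaDB.
    """
    ranked = sorted(store, key=lambda doc_id: cosine_similarity(query_vec, store[doc_id]), reverse=True)
    return ranked[:k]
```

ChromaDB performs the same nearest-neighbour lookup, but over high-dimensional embeddings and with an index that avoids scanning every vector.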
### Frontend
- Developed with TypeScript, React, and TailwindCSS for a seamless user experience.
## Tech Stack

- FastAPI: For building robust REST APIs.
- Langchain: For integrating Llama models and HuggingFace embeddings.
- Groq: Hosting and managing the Llama-based text generation model.
- Serper: For performing web searches.
- ChromaDB: For storing vector embeddings.
- MongoDB: For storing user information and query history.
- React: For building a dynamic and responsive UI.
- TypeScript: Ensures type safety in development.
- TailwindCSS: For styling with utility-first CSS.
## Prerequisites

Ensure the following are installed on your system:
- Python 3.10+
- Node.js and npm
- Docker (optional for containerization)
- MongoDB Atlas account
## Backend Setup

- Clone the repository:

  ```bash
  git clone https://github.com/<username>/Celebro.git
  cd Celebro/backend
  ```
- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```
- Set up environment variables: create a `.env` file in the `backend` directory with the following content:

  ```env
  MONGODB_URI=<your-mongodb-uri>
  DATABASE_NAME=<your-database-name>
  GROQ_API_KEY=<your-groq-api-key>
  SERPER_API_KEY=<your-serper-api-key>
  EMBEDDINGMODEL=<huggingface-embedding-model>
  SERPER_SEARCH_URL=<serper-search-url>
  TEXT_GEN_MODEL=<llama-text-generation-model>
  ```
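A fail-fast way to load these variables at startup, sketched with the standard library (Celebro's `core/config.py` may well use Pydantic settings instead, as is common with FastAPI; the class and function names here are illustrative):

```python
import os
from dataclasses import dataclass


@dataclass(frozen=True)
class Settings:
    mongodb_uri: str
    database_name: str
    groq_api_key: str
    serper_api_key: str


def load_settings() -> Settings:
    """Read required variables, raising at startup if any is missing."""

    def require(name: str) -> str:
        value = os.getenv(name)
        if value is None:
            raise RuntimeError(f"Missing required environment variable: {name}")
        return value

    return Settings(
        mongodb_uri=require("MONGODB_URI"),
        database_name=require("DATABASE_NAME"),
        groq_api_key=require("GROQ_API_KEY"),
        serper_api_key=require("SERPER_API_KEY"),
    )
```

Failing at startup rather than on first use makes a misconfigured deployment obvious immediately.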
- Run the server:

  ```bash
  uvicorn main:app --host 0.0.0.0 --port 8000
  ```

  Access the API docs at `http://localhost:8000/docs`.
## Frontend Setup

- Navigate to the frontend directory:

  ```bash
  cd Celebro/frontend
  ```
- Install dependencies:

  ```bash
  npm install
  ```
- Set up environment variables: create a `.env` file in the `frontend` directory with the following content:

  ```env
  VITE_API_BASE_URL=http://localhost:8000
  ```
- Run the frontend:

  ```bash
  npm run dev
  ```

  Access the application at `http://localhost:5173`.
## Project Structure

```
backend/
├── core/
│   ├── config.py            # Configuration settings
├── models/
│   ├── user.py              # User model
│   ├── query.py             # Query model
├── services/
│   ├── auth_service.py      # Handles authentication
│   ├── search_service.py    # Integrates Serper and AI response generation
│   ├── chat_service.py      # Manages chat functionalities
├── main.py                  # Entry point for FastAPI
├── requirements.txt         # Python dependencies

frontend/
├── src/
│   ├── App.tsx              # Main app file
├── public/
│   ├── index.html           # HTML template
├── package.json             # Frontend dependencies
```
## Future Enhancements

- Response Personalization: Users can choose the tone of the response.
- Advanced Query Filters: Allow users to filter queries by date or topic.
- Model Tuning: Support for fine-tuning Llama and HuggingFace models.
- Deployment: Host on AWS for scalability.
## Contributing

- Fork the repository.
- Create a feature branch:

  ```bash
  git checkout -b feature-name
  ```

- Commit your changes:

  ```bash
  git commit -m "Add feature"
  ```

- Push to the branch:

  ```bash
  git push origin feature-name
  ```

- Create a pull request.
## License

This project is licensed under the MIT License. See the `LICENSE` file for details.