# Developer Search Engine

This project was developed for the Round 3 interview of Google Developer Student Clubs (GDSC-HCMUS). The tech stack is React and Next.js, as specified in the project requirements. Released on October 28 with a submission deadline of November 1, the project delivers a locally hosted, AI-powered search engine that lets developers quickly search GitHub repositories and retrieve relevant information with the assistance of a Large Language Model (LLM). The LLM-based assistant provides in-depth insights into the topics developers are exploring, enhancing productivity with intelligent recommendations and explanations.

Watch the demo video HERE.

## Getting Started

### Environment Setup

Since our project hosts the LLM locally and communicates with it through a REST API, we do not deploy it on Vercel. To set up a local environment with all the tools needed for development and deployment, we recommend using the following Docker image:

```shell
docker pull kylepaul/deeplearning:deployment
```

Bring the container up with Docker Compose, referring to the `compose.yml` file in this repository for the detailed setup.
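For orientation, a compose file for this image might look like the sketch below. The service name, ports, volumes, and command are illustrative assumptions; the `compose.yml` in this repository is the authoritative configuration.

```yaml
# Hypothetical sketch only -- see compose.yml in this repo for the
# real service names, ports, and volumes.
services:
  dev:
    image: kylepaul/deeplearning:deployment
    ports:
      - "3000:3000"   # Next.js dev server
      - "8000:8000"   # local LLM REST API (assumed port)
    volumes:
      - .:/workspace
    working_dir: /workspace
    command: sleep infinity  # keep the container alive for interactive use
```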

Next, ensure that Node.js is up to date by following this installation guide. Once installed, set the `NODE_OPTIONS` environment variable:

```shell
export NODE_OPTIONS=--openssl-legacy-provider
```

## Deploying the AI Server

For local AI hosting, we use the `Qwen2.5-Coder-1.5B-Instruct-GPTQ-Int8` model, a compact yet capable model for our needs. You can deploy it with Triton Inference Server or via the Qwen framework for seamless integration.
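Once the model server is running, the website talks to it over REST. As an illustration (not code from this repository), the sketch below builds an OpenAI-style chat-completion request for a locally hosted Qwen model; the endpoint URL, port, and payload schema are assumptions based on common local-serving setups, so adjust them to match your actual server.

```javascript
// Hypothetical helper: build a chat-completion request for a locally
// hosted Qwen model. The endpoint and schema are assumptions, not
// this repository's actual API.
function buildChatRequest(
  userQuery,
  { model = "Qwen2.5-Coder-1.5B-Instruct-GPTQ-Int8", maxTokens = 512 } = {}
) {
  return {
    url: "http://localhost:8000/v1/chat/completions", // assumed port/path
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model,
        max_tokens: maxTokens,
        messages: [
          { role: "system", content: "You are a helpful coding assistant." },
          { role: "user", content: userQuery },
        ],
      }),
    },
  };
}

// Usage with fetch (Node 18+ or the browser):
// const { url, options } = buildChatRequest("Summarize this repository");
// const res = await fetch(url, options);
```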

## Launching the Website

To start the website, run the following command:

```shell
npm run dev
```

Then open http://localhost:3000 in your browser to view the application.