infinity/README.md at main · infiniflow/infinity #938
The AI-native database built for LLM applications, providing incredibly fast hybrid search across dense embeddings, sparse embeddings, tensors, and full text.
Document | Benchmark | Twitter | Discord
Infinity is a cutting-edge AI-native database that provides a wide range of search capabilities for rich data types such as dense vectors, sparse vectors, tensors, full text, and structured data. It provides robust support for various LLM applications, including search, recommendation, question answering, conversational AI, copilots, content generation, and many other RAG (Retrieval-Augmented Generation) applications.
⚡️ Performance
🌟 Key Features
Infinity combines high performance, flexibility, and ease of use, with many features designed to address the challenges facing next-generation AI applications:
🚀 Incredibly fast
🔮 Powerful search
🍔 Rich data types
Supports a wide range of data types including strings, numerics, vectors, and more.
🎁 Ease-of-use
🎮 Get Started
Infinity supports two working modes: embedded mode and client-server mode. Embedded mode lets you embed Infinity directly into your Python application, with no need to connect to a separate backend server. The following shows how to operate in embedded mode:
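A minimal sketch of embedded mode, based on the project's documented Python SDK (`infinity_embedded`). The data directory path, table name, column schema, and sample rows below are illustrative assumptions, and exact method names may vary between releases:

```python
# Sketch of Infinity's embedded mode; schema and data are illustrative.
import infinity_embedded as infinity

# Connect to an embedded Infinity instance; data is persisted under this path.
infinity_object = infinity.connect("/absolute/path/to/data_dir")

db_object = infinity_object.get_database("default_db")
table_object = db_object.create_table(
    "my_table",
    {
        "num": {"type": "integer"},
        "body": {"type": "varchar"},
        "vec": {"type": "vector, 4, float"},
    },
)

table_object.insert([
    {"num": 1, "body": "unnecessary and harmful", "vec": [1.0, 1.2, 0.8, 0.9]},
    {"num": 2, "body": "Office for Harmful Blocks", "vec": [4.0, 4.2, 4.3, 4.5]},
])

# Dense-vector search: top-2 rows by inner product against the query vector.
res = (
    table_object.output(["*"])
    .match_dense("vec", [3.0, 2.8, 2.7, 3.1], "float", "ip", 2)
    .to_pl()
)
print(res)
```

The same query builder can chain sparse, full-text, and dense clauses for hybrid search; see the SDK documentation for the full API.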
🔧 Deploy Infinity in client-server mode
If you wish to deploy Infinity with the server and client as separate processes, see the Deploy infinity server guide.
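Once the server is running, a client connects over the network instead of opening a local data directory. A minimal sketch, assuming the SDK's default port of 23817 and a server on localhost (both are assumptions to adjust for your deployment):

```python
# Sketch: connecting to a separately deployed Infinity server.
import infinity

# NetworkAddress points the client at the server process; 23817 is the
# SDK's documented default port (adjust host/port for your deployment).
infinity_object = infinity.connect(infinity.NetworkAddress("127.0.0.1", 23817))

db_object = infinity_object.get_database("default_db")
print(db_object.list_tables())

infinity_object.disconnect()
```

Apart from the connection call, the client-server API mirrors the embedded API, so application code can switch modes with minimal changes.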
🔧 Build from Source
See the Build from Source guide.
📚 Document
📜 Roadmap
See the Infinity Roadmap 2024
🙌 Community