diff --git a/integrations/fastrag.md b/integrations/fastrag.md
index c8af45b7..b7e7fcb0 100644
--- a/integrations/fastrag.md
+++ b/integrations/fastrag.md
@@ -11,9 +11,26 @@ repo: https://github.com/IntelLabs/fastRAG
 type: Custom Component
 report_issue: https://github.com/IntelLabs/fastRAG/issues
 logo: /logos/intel-labs.png
+version: Haystack 2.0
 ---

-fast**RAG** is a research framework, that extends [Haystack](https://github.com/deepset-ai/haystack), with abilities to build ***efficient*** and ***optimized*** retrieval augmented generative pipelines (with emphasis on ***Intel hardware***), incorporating state-of-the-art LLMs and Information Retrieval modules.
+fast**RAG** is a research framework for ***efficient*** and ***optimized*** retrieval augmented generative pipelines,
+incorporating state-of-the-art LLMs and Information Retrieval. fastRAG is designed to empower researchers and developers
+with a comprehensive tool-set for advancing retrieval augmented generation.
+
+Comments, suggestions, issues and pull-requests are welcomed! ❤️
+
+> **IMPORTANT**
+>
+> Now compatible with Haystack v2+. Please report any possible issues you find.
+
+## 📣 Updates
+
+- **2024-05**: fastRAG V3 is Haystack 2.0 compatible 🔥
+- **2023-12**: Gaudi2 and ONNX runtime support; Optimized Embedding models; Multi-modality and Chat demos; [REPLUG](https://arxiv.org/abs/2301.12652) text generation.
+- **2023-06**: ColBERT index modification: adding/removing documents.
+- **2023-05**: [RAG with LLM and dynamic prompt synthesis example](https://github.com/IntelLabs/fastRAG/blob/main/examples/rag-prompt-hf.ipynb).
+- **2023-04**: Qdrant `DocumentStore` support.

 ## Key Features

@@ -21,9 +38,9 @@
 - **Optimized for Intel Hardware**: Leverage [Intel extensions for PyTorch (IPEX)](https://github.com/intel/intel-extension-for-pytorch), [🤗 Optimum Intel](https://github.com/huggingface/optimum-intel) and [🤗 Optimum-Habana](https://github.com/huggingface/optimum-habana) for *running as optimal as possible* on Intel® Xeon® Processors and Intel® Gaudi® AI accelerators.
 - **Customizable**: fastRAG is built using [Haystack](https://github.com/deepset-ai/haystack) and HuggingFace. All of fastRAG's components are 100% Haystack compatible.

-## Components
+## 🚀 Components

-For a brief overview of the various unique components in fastRAG refer to the [Components Overview]([components.md](https://github.com/IntelLabs/fastRAG/blob/main/components.md)) page.
+For a brief overview of the various unique components in fastRAG refer to the [Components Overview](https://github.com/IntelLabs/fastRAG/blob/main/components.md) page.
 | ONNX Runtime | Running LLMs with optimized ONNX-runtime |
+| OpenVINO | Running quantized LLMs using OpenVINO |
+| Llama-CPP | Running RAG Pipelines with LLMs on a Llama CPP backend |
@@ -80,7 +101,7 @@ For a brief overview of the various unique components in fastRAG refer to the [C
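Since fastRAG v3 targets Haystack 2.0 and its components are described as 100% Haystack compatible, a fastRAG-based pipeline has the same shape as any other Haystack 2.x pipeline. The sketch below illustrates that shape using only stock Haystack 2.x components; the documents, prompt template, and model name are illustrative placeholders, and a fastRAG retriever or generator (for example one backed by ONNX Runtime, OpenVINO, or Llama-CPP) would be swapped in for the stock component of the same type, with the exact class names taken from the Components Overview linked above.

```python
from haystack import Document, Pipeline
from haystack.components.builders import PromptBuilder
from haystack.components.generators import HuggingFaceLocalGenerator
from haystack.components.retrievers.in_memory import InMemoryBM25Retriever
from haystack.document_stores.in_memory import InMemoryDocumentStore

# Index a couple of toy documents in an in-memory store.
document_store = InMemoryDocumentStore()
document_store.write_documents([
    Document(content="fastRAG extends Haystack with optimized RAG components."),
    Document(content="fastRAG targets Intel Xeon processors and Intel Gaudi AI accelerators."),
])

prompt_template = """
Answer the question using only the context below.
Context:
{% for doc in documents %}
{{ doc.content }}
{% endfor %}
Question: {{ query }}
Answer:
"""

pipeline = Pipeline()
# A fastRAG retriever or generator would be registered here in place of the stock components.
pipeline.add_component("retriever", InMemoryBM25Retriever(document_store=document_store))
pipeline.add_component("prompt_builder", PromptBuilder(template=prompt_template))
pipeline.add_component("llm", HuggingFaceLocalGenerator(model="google/flan-t5-base"))

# Wire retriever -> prompt builder -> generator.
pipeline.connect("retriever.documents", "prompt_builder.documents")
pipeline.connect("prompt_builder.prompt", "llm.prompt")

query = "What hardware does fastRAG target?"
result = pipeline.run({"retriever": {"query": query}, "prompt_builder": {"query": query}})
print(result["llm"]["replies"][0])
```

The swap-in point is the component registration: the rest of the wiring and the `run()` call stay the same, which is what the 100% Haystack compatibility claim amounts to in practice.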