From fead4749b920fe6ffda55961b8d3ea75a825c29f Mon Sep 17 00:00:00 2001
From: Leonid Ganeline
Date: Tue, 15 Oct 2024 09:38:12 -0700
Subject: [PATCH] docs: `integrations` updates 20 (#27210)

Added missed provider pages. Added descriptions and links.

Co-authored-by: Erick Friis
---
 docs/docs/integrations/providers/konlpy.mdx    | 21 ++++++++++++
 docs/docs/integrations/providers/kuzu.mdx      | 32 +++++++++++++++++++
 .../integrations/providers/llama_index.mdx     | 32 +++++++++++++++++++
 .../docs/integrations/providers/llamaedge.mdx  | 24 ++++++++++++
 .../docs/integrations/providers/llamafile.mdx  | 31 ++++++++++++++++++
 5 files changed, 140 insertions(+)
 create mode 100644 docs/docs/integrations/providers/konlpy.mdx
 create mode 100644 docs/docs/integrations/providers/kuzu.mdx
 create mode 100644 docs/docs/integrations/providers/llama_index.mdx
 create mode 100644 docs/docs/integrations/providers/llamaedge.mdx
 create mode 100644 docs/docs/integrations/providers/llamafile.mdx

diff --git a/docs/docs/integrations/providers/konlpy.mdx b/docs/docs/integrations/providers/konlpy.mdx
new file mode 100644
index 0000000000000..d4d925144e34c
--- /dev/null
+++ b/docs/docs/integrations/providers/konlpy.mdx
@@ -0,0 +1,21 @@
+# KoNLPy
+
+>[KoNLPy](https://konlpy.org/) is a Python package for natural language processing (NLP)
+> of the Korean language.
+
+
+## Installation and Setup
+
+You need to install the `konlpy` Python package.
+
+```bash
+pip install konlpy
+```
+
+## Text splitter
+
+See a [usage example](/docs/how_to/split_by_token/#konlpy).
+
+```python
+from langchain_text_splitters import KonlpyTextSplitter
+```
diff --git a/docs/docs/integrations/providers/kuzu.mdx b/docs/docs/integrations/providers/kuzu.mdx
new file mode 100644
index 0000000000000..286b38e0d81cc
--- /dev/null
+++ b/docs/docs/integrations/providers/kuzu.mdx
@@ -0,0 +1,32 @@
+# Kùzu
+
+>[Kùzu](https://kuzudb.com/) is a company based in Waterloo, Ontario, Canada.
+> It provides a highly scalable, extremely fast, easy-to-use [embeddable graph database](https://github.com/kuzudb/kuzu).
+
+
+
+## Installation and Setup
+
+You need to install the `kuzu` Python package.
+
+```bash
+pip install kuzu
+```
+
+## Graph database
+
+See a [usage example](/docs/integrations/graphs/kuzu_db).
+
+```python
+from langchain_community.graphs import KuzuGraph
+```
+
+## Chain
+
+See a [usage example](/docs/integrations/graphs/kuzu_db/#creating-kuzuqachain).
+
+```python
+from langchain.chains import KuzuQAChain
+```
+
+
diff --git a/docs/docs/integrations/providers/llama_index.mdx b/docs/docs/integrations/providers/llama_index.mdx
new file mode 100644
index 0000000000000..4ba7ac0eebae6
--- /dev/null
+++ b/docs/docs/integrations/providers/llama_index.mdx
@@ -0,0 +1,32 @@
+# LlamaIndex
+
+>[LlamaIndex](https://www.llamaindex.ai/) is the leading data framework for building LLM applications.
+
+
+## Installation and Setup
+
+You need to install the `llama-index` Python package.
+
+```bash
+pip install llama-index
+```
+
+See the [installation instructions](https://docs.llamaindex.ai/en/stable/getting_started/installation/).
+
+## Retrievers
+
+### LlamaIndexRetriever
+
+>It is used for question answering with sources over a LlamaIndex data structure.
+
+```python
+from langchain_community.retrievers.llama_index import LlamaIndexRetriever
+```
+
+### LlamaIndexGraphRetriever
+
+>It is used for question answering with sources over a LlamaIndex graph data structure.
+
+```python
+from langchain_community.retrievers.llama_index import LlamaIndexGraphRetriever
+```
diff --git a/docs/docs/integrations/providers/llamaedge.mdx b/docs/docs/integrations/providers/llamaedge.mdx
new file mode 100644
index 0000000000000..64fbb50d389db
--- /dev/null
+++ b/docs/docs/integrations/providers/llamaedge.mdx
@@ -0,0 +1,24 @@
+# LlamaEdge
+
+>[LlamaEdge](https://llamaedge.com/docs/intro/) is the easiest & fastest way to run customized
+> and fine-tuned LLMs locally or on the edge.
+>
+>* Lightweight inference apps: `LlamaEdge` is measured in MBs, not GBs
+>* Native and GPU-accelerated performance
+>* Supports many GPUs and hardware accelerators
+>* Supports many optimized inference libraries
+>* Wide selection of AI / LLM models
+
+
+
+## Installation and Setup
+
+See the [installation instructions](https://llamaedge.com/docs/user-guide/quick-start-command).
+
+## Chat models
+
+See a [usage example](/docs/integrations/chat/llama_edge).
+
+```python
+from langchain_community.chat_models.llama_edge import LlamaEdgeChatService
+```
diff --git a/docs/docs/integrations/providers/llamafile.mdx b/docs/docs/integrations/providers/llamafile.mdx
new file mode 100644
index 0000000000000..d0d0f268d100b
--- /dev/null
+++ b/docs/docs/integrations/providers/llamafile.mdx
@@ -0,0 +1,31 @@
+# llamafile
+
+>[llamafile](https://github.com/Mozilla-Ocho/llamafile) lets you distribute and run LLMs
+> with a single file.
+
+>`llamafile` makes open LLMs much more accessible to both developers and end users.
+> It does that by combining [llama.cpp](https://github.com/ggerganov/llama.cpp) with
+> [Cosmopolitan Libc](https://github.com/jart/cosmopolitan) into one framework that collapses
+> all the complexity of LLMs down to a single-file executable (called a "llamafile")
+> that runs locally on most computers, with no installation.
+
+
+## Installation and Setup
+
+See the [installation instructions](https://github.com/Mozilla-Ocho/llamafile?tab=readme-ov-file#quickstart).
+
+## LLMs
+
+See a [usage example](/docs/integrations/llms/llamafile).
+
+```python
+from langchain_community.llms.llamafile import Llamafile
+```
+
+## Embedding models
+
+See a [usage example](/docs/integrations/text_embedding/llamafile).
+
+```python
+from langchain_community.embeddings import LlamafileEmbeddings
+```
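
For reviewers who want a quick local smoke test of the pages added above, a minimal sketch (not part of the patch; module names are the ones the new pages import from) that reports which of the referenced modules resolve in the current environment without raising `ImportError` for missing optional dependencies:

```python
import importlib.util


def resolvable(name: str) -> bool:
    """True if `name` can be imported in this environment."""
    try:
        # find_spec imports parent packages, so guard against missing ones.
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        return False


# Modules referenced by the new provider pages.
modules = [
    "langchain_text_splitters",                    # KonlpyTextSplitter
    "langchain_community.graphs",                  # KuzuGraph
    "langchain_community.retrievers.llama_index",  # LlamaIndexRetriever
    "langchain_community.chat_models.llama_edge",  # LlamaEdgeChatService
    "langchain_community.llms.llamafile",          # Llamafile
    "langchain_community.embeddings",              # LlamafileEmbeddings
]

missing = [m for m in modules if not resolvable(m)]
print("unresolved modules:", missing or "none")
```

Any module listed as unresolved needs its optional package installed (e.g. `konlpy`, `kuzu`, `llama-index`) before the corresponding doc page's example can run.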