Commit

Merge branch 'master' into master
luiz0992 authored Oct 15, 2024
2 parents 450a3b3 + fead474 commit 60f30fd
Showing 5 changed files with 140 additions and 0 deletions.
21 changes: 21 additions & 0 deletions docs/docs/integrations/providers/konlpy.mdx
@@ -0,0 +1,21 @@
# KoNLPy

>[KoNLPy](https://konlpy.org/) is a Python package for natural language processing (NLP)
> of the Korean language.

## Installation and Setup

You need to install the `konlpy` Python package.

```bash
pip install konlpy
```

## Text splitter

See a [usage example](/docs/how_to/split_by_token/#konlpy).

```python
from langchain_text_splitters import KonlpyTextSplitter
```
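
A minimal sketch of splitting Korean text with this splitter; the sample sentence and chunk size are illustrative, and `konlpy` must be installed as above:

```python
from langchain_text_splitters import KonlpyTextSplitter

# KonlpyTextSplitter uses KoNLPy's Kkma analyzer to find Korean sentence boundaries.
text_splitter = KonlpyTextSplitter(chunk_size=200, chunk_overlap=0)

korean_text = "안녕하세요. 이 문서는 한국어 텍스트 분할 예시입니다."  # illustrative sample
for chunk in text_splitter.split_text(korean_text):
    print(chunk)
```
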
32 changes: 32 additions & 0 deletions docs/docs/integrations/providers/kuzu.mdx
@@ -0,0 +1,32 @@
# Kùzu

>[Kùzu](https://kuzudb.com/) is a highly scalable, extremely fast, easy-to-use
> [embeddable graph database](https://github.com/kuzudb/kuzu), developed by a company based in Waterloo, Ontario, Canada.


## Installation and Setup

You need to install the `kuzu` Python package.

```bash
pip install kuzu
```

## Graph database

See a [usage example](/docs/integrations/graphs/kuzu_db).

```python
from langchain_community.graphs import KuzuGraph
```
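
A minimal sketch of wrapping a local Kùzu database for LangChain; the database path is illustrative, and loading a schema and data is assumed to happen separately:

```python
import kuzu

from langchain_community.graphs import KuzuGraph

# Open (or create) a local Kùzu database and wrap it for LangChain.
db = kuzu.Database("./test_db")  # illustrative path
graph = KuzuGraph(db)

# Inspect the node/relationship schema the wrapper extracts from the database.
print(graph.get_schema)
```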

## Chain

See a [usage example](/docs/integrations/graphs/kuzu_db/#creating-kuzuqachain).

```python
from langchain.chains import KuzuQAChain
```
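
A rough sketch of wiring the chain to a graph and an LLM; `ChatOpenAI` is an illustrative model choice (it requires `langchain-openai` and an API key), and the database is assumed to already contain data:

```python
import kuzu

from langchain.chains import KuzuQAChain
from langchain_community.graphs import KuzuGraph
from langchain_openai import ChatOpenAI

db = kuzu.Database("./test_db")  # illustrative path; assumes data was loaded beforehand
graph = KuzuGraph(db)

chain = KuzuQAChain.from_llm(
    llm=ChatOpenAI(temperature=0),  # illustrative model choice
    graph=graph,
    verbose=True,
    allow_dangerous_requests=True,  # recent versions require opting in, since generated Cypher runs against your database
)
chain.invoke("Who directed which movie?")  # example question against your own graph data
```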


32 changes: 32 additions & 0 deletions docs/docs/integrations/providers/llama_index.mdx
@@ -0,0 +1,32 @@
# LlamaIndex

>[LlamaIndex](https://www.llamaindex.ai/) is the leading data framework for building LLM applications.

## Installation and Setup

You need to install the `llama-index` Python package.

```bash
pip install llama-index
```

See the [installation instructions](https://docs.llamaindex.ai/en/stable/getting_started/installation/).

## Retrievers

### LlamaIndexRetriever

>It is used for question-answering with sources over a LlamaIndex data structure.

```python
from langchain_community.retrievers.llama_index import LlamaIndexRetriever
```
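
A rough sketch of using the retriever over a small in-memory LlamaIndex index; the index construction below is illustrative and uses LlamaIndex's default embedding model (which expects an `OPENAI_API_KEY`):

```python
from langchain_community.retrievers.llama_index import LlamaIndexRetriever
from llama_index.core import Document, VectorStoreIndex

# Build a tiny in-memory LlamaIndex index from illustrative data.
index = VectorStoreIndex.from_documents(
    [Document(text="LangChain can use LlamaIndex indexes as retrievers.")]
)

# Wrap the index so it can be used anywhere LangChain expects a retriever.
retriever = LlamaIndexRetriever(index=index)
docs = retriever.invoke("What can LangChain use as retrievers?")
for doc in docs:
    print(doc.page_content)
```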

### LlamaIndexGraphRetriever

>It is used for question-answering with sources over a LlamaIndex graph data structure.

```python
from langchain_community.retrievers.llama_index import LlamaIndexGraphRetriever
```
24 changes: 24 additions & 0 deletions docs/docs/integrations/providers/llamaedge.mdx
@@ -0,0 +1,24 @@
# LlamaEdge

>[LlamaEdge](https://llamaedge.com/docs/intro/) is the easiest & fastest way to run customized
> and fine-tuned LLMs locally or on the edge.
>
>* Lightweight inference apps: `LlamaEdge` weighs in at MBs instead of GBs
>* Native and GPU-accelerated performance
>* Supports many GPU and hardware accelerators
>* Supports many optimized inference libraries
>* Wide selection of AI / LLM models


## Installation and Setup

See the [installation instructions](https://llamaedge.com/docs/user-guide/quick-start-command).

## Chat models

See a [usage example](/docs/integrations/chat/llama_edge).

```python
from langchain_community.chat_models.llama_edge import LlamaEdgeChatService
```
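
A minimal sketch of chatting with a locally running LlamaEdge API server; the service URL assumes the default local port from the quick-start guide:

```python
from langchain_community.chat_models.llama_edge import LlamaEdgeChatService
from langchain_core.messages import HumanMessage

# Assumes a LlamaEdge API server is already running locally (see the quick-start guide).
chat = LlamaEdgeChatService(service_url="http://localhost:8080")  # assumed default URL
response = chat.invoke([HumanMessage(content="What is the capital of France?")])
print(response.content)
```
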
31 changes: 31 additions & 0 deletions docs/docs/integrations/providers/llamafile.mdx
@@ -0,0 +1,31 @@
# llamafile

>[llamafile](https://github.com/Mozilla-Ocho/llamafile) lets you distribute and run LLMs
> with a single file.
>`llamafile` makes open LLMs much more accessible to both developers and end users.
> It does this by combining [llama.cpp](https://github.com/ggerganov/llama.cpp) with
> [Cosmopolitan Libc](https://github.com/jart/cosmopolitan) into one framework that collapses
> all the complexity of LLMs down to a single-file executable (called a "llamafile")
> that runs locally on most computers, with no installation.

## Installation and Setup

See the [installation instructions](https://github.com/Mozilla-Ocho/llamafile?tab=readme-ov-file#quickstart).

## LLMs

See a [usage example](/docs/integrations/llms/llamafile).

```python
from langchain_community.llms.llamafile import Llamafile
```
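
A minimal sketch of calling a llamafile that is already running in server mode on its default port (the model file name is illustrative):

```python
from langchain_community.llms.llamafile import Llamafile

# Assumes a llamafile was started in server mode, e.g.
#   ./TinyLlama-1.1B-Chat-v1.0.Q5_K_M.llamafile --server --nobrowser
# which serves an API on http://localhost:8080 by default.
llm = Llamafile()
print(llm.invoke("Tell me a joke."))
```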

## Embedding models

See a [usage example](/docs/integrations/text_embedding/llamafile).

```python
from langchain_community.embeddings import LlamafileEmbeddings
```
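
A minimal sketch of embedding text with a llamafile server started with embeddings enabled (the `--embedding` flag; the model file name is illustrative):

```python
from langchain_community.embeddings import LlamafileEmbeddings

# Assumes a llamafile was started with embeddings enabled, e.g.
#   ./TinyLlama-1.1B-Chat-v1.0.Q5_K_M.llamafile --server --nobrowser --embedding
# serving on http://localhost:8080 by default.
embedder = LlamafileEmbeddings()
vector = embedder.embed_query("This is a test document.")
print(len(vector))
```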
