Commit

stellasia committed Sep 27, 2024
2 parents 360d6cc + 56bc660 commit 3791133
Showing 19 changed files with 1,344 additions and 381 deletions.
15 changes: 11 additions & 4 deletions CHANGELOG.md
@@ -4,21 +4,25 @@

### Added
- Added AzureOpenAILLM and AzureOpenAIEmbeddings to support Azure served OpenAI models
- Added `template` validation in `PromptTemplate` class upon construction.
- `custom_prompt` arg is now converted to `Text2CypherTemplate` class within the `Text2CypherRetriever.get_search_results` method.
- `Text2CypherTemplate` and `RAGTemplate` prompt templates now require `query_text` arg and will error if it is not present. Previous `query_text` aliases may be used, but will warn of deprecation.
- Fixed bug in `Text2CypherRetriever` using `custom_prompt` arg where the `search` method would not inject the `query_text` content.
- Added feature to include kwargs in `Text2CypherRetriever.search()` that will be injected into a custom prompt, if provided.
- Added validation to `custom_prompt` parameter of `Text2CypherRetriever` to ensure that `query_text` placeholder exists in prompt.
- Introduced a fixed size text splitter component for splitting text into specified fixed size chunks with overlap. Updated examples and tests to utilize this new component.
- Introduced Vertex AI LLM class for integrating Vertex AI models.
- Added unit tests for the Vertex AI LLM class.
- Added support for Cohere LLM and embeddings - added optional dependency to `cohere`.
- Added support for Anthropic LLM - added optional dependency to `anthropic`.
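The fixed size text splitter mentioned above can be illustrated with a minimal sketch of fixed-size chunking with overlap (the function name `split_fixed_size` and its defaults are illustrative only, not the component's actual API):

```python
def split_fixed_size(text: str, chunk_size: int = 200, overlap: int = 20) -> list[str]:
    """Split text into fixed-size chunks, each overlapping the previous by `overlap` characters."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    # Each chunk starts `chunk_size - overlap` characters after the previous one,
    # so consecutive chunks share `overlap` characters of context.
    step = chunk_size - overlap
    return [text[i : i + chunk_size] for i in range(0, len(text), step)]

chunks = split_fixed_size("abcdefghij", chunk_size=4, overlap=2)
# chunks == ["abcd", "cdef", "efgh", "ghij", "ij"]
```

The overlap preserves context across chunk boundaries, which helps downstream retrieval when a sentence straddles two chunks.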

### Fixed
- Resolved import issue with the Vertex AI Embeddings class.
- Resolved issue where Neo4jWriter component would raise an error if the start or end node ID was not defined properly in the input.

### Changed
- Moved the Embedder class to the neo4j_graphrag.embeddings directory for better organization alongside other custom embedders.
- Neo4jWriter component now runs a single query to merge a node and set its embeddings, if any.
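As a rough sketch of what such a single merge-and-set statement can look like (the query builder below is hypothetical and not the actual Cypher used by `Neo4jWriter`):

```python
from typing import Optional

def build_merge_query(label: str, embedding_property: Optional[str]) -> str:
    """Build one Cypher statement that merges a node and, when an embedding
    property is configured, sets it in the same statement (hypothetical sketch)."""
    query = f"MERGE (n:`{label}` {{id: $id}})"
    if embedding_property:
        # Setting the embedding in the same statement avoids a second round trip.
        query += f" SET n.`{embedding_property}` = $embedding"
    return query

print(build_merge_query("Chunk", "embedding"))
# MERGE (n:`Chunk` {id: $id}) SET n.`embedding` = $embedding
```

Combining the merge and the property write into one statement halves the number of queries per node compared to a separate `MERGE` followed by a `SET`.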

## 0.6.3
### Changed
@@ -31,6 +35,9 @@
- Updated documentation to include OpenAI and Vertex AI embeddings classes.
- Added google-cloud-aiplatform as an optional dependency for Vertex AI embeddings.

### Fixed
- Make `pygraphviz` an optional dependency - it is now only required when calling `pipeline.draw`.

## 0.6.2

### Fixed
43 changes: 38 additions & 5 deletions docs/source/api.rst
@@ -162,29 +162,62 @@ VertexAIEmbeddings
.. autoclass:: neo4j_graphrag.embeddings.vertexai.VertexAIEmbeddings
:members:

CohereEmbeddings
================

.. autoclass:: neo4j_graphrag.embeddings.cohere.CohereEmbeddings
:members:

**********
Generation
**********

LLM
===

LLMInterface
------------

.. autoclass:: neo4j_graphrag.llm.LLMInterface
:members:


OpenAILLM
---------

.. autoclass:: neo4j_graphrag.llm.openai_llm.OpenAILLM
:members:
:undoc-members: get_messages, client_class, async_client_class


AzureOpenAILLM
--------------

.. autoclass:: neo4j_graphrag.llm.openai_llm.AzureOpenAILLM
:members:
:undoc-members: get_messages, client_class, async_client_class


VertexAILLM
-----------

.. autoclass:: neo4j_graphrag.llm.vertexai_llm.VertexAILLM
:members:

AnthropicLLM
------------

.. autoclass:: neo4j_graphrag.llm.anthropic_llm.AnthropicLLM
:members:


CohereLLM
---------

.. autoclass:: neo4j_graphrag.llm.cohere_llm.CohereLLM
:members:


PromptTemplate
==============

124 changes: 111 additions & 13 deletions docs/source/user_guide_rag.rst
@@ -23,9 +23,9 @@ In practice, it's done with only a few lines of code:
from neo4j import GraphDatabase
from neo4j_graphrag.retrievers import VectorRetriever
from neo4j_graphrag.llm import OpenAILLM
from neo4j_graphrag.generation import GraphRAG
from neo4j_graphrag.embeddings import OpenAIEmbeddings
# 1. Neo4j driver
URI = "neo4j://localhost:7687"
@@ -56,6 +56,12 @@ In practice, it's done with only a few lines of code:
print(response.answer)
.. note::

In order to run this code, the `openai` Python package needs to be installed:
`pip install openai`


The following sections provide more details about how to customize this code.

******************************
@@ -69,12 +75,15 @@ Using Another LLM Model

If OpenAI cannot be used directly, there are a few available alternatives:

- Use Azure OpenAI (GPT...).
- Use Google VertexAI (Gemini...).
- Use Anthropic LLM (Claude...).
- Use Cohere.
- Use a local Ollama model.
- Implement a custom interface.
- Utilize any LangChain chat model.

All options are illustrated below.

Using Azure OpenAI LLM
----------------------
@@ -83,18 +92,102 @@ It is possible to use Azure OpenAI by switching to the `AzureOpenAILLM` class:

.. code:: python
from neo4j_graphrag.llm import AzureOpenAILLM
llm = AzureOpenAILLM(
model_name="gpt-4o",
azure_endpoint="https://example-endpoint.openai.azure.com/", # update with your endpoint
api_version="2024-06-01", # update appropriate version
api_key="...", # api_key is optional and can also be set with OPENAI_API_KEY env var
)
llm.invoke("say something")
Check the OpenAI Python client `documentation <https://github.com/openai/openai-python?tab=readme-ov-file#microsoft-azure-openai>`_
to learn more about the configuration.

.. note::

In order to run this code, the `openai` Python package needs to be installed:
`pip install openai`


See :ref:`azureopenaillm`.


Using VertexAI LLM
------------------

To use VertexAI, instantiate the `VertexAILLM` class:

.. code:: python
from neo4j_graphrag.llm import VertexAILLM
from vertexai.generative_models import GenerationConfig
generation_config = GenerationConfig(temperature=0.0)
llm = VertexAILLM(
model_name="gemini-1.5-flash-001", generation_config=generation_config
)
llm.invoke("say something")
.. note::

In order to run this code, the `google-cloud-aiplatform` Python package needs to be installed:
`pip install google-cloud-aiplatform`


See :ref:`vertexaillm`.


Using Anthropic LLM
-------------------

To use Anthropic, instantiate the `AnthropicLLM` class:

.. code:: python
from neo4j_graphrag.llm import AnthropicLLM
llm = AnthropicLLM(
model_name="claude-3-opus-20240229",
model_params={"max_tokens": 1000}, # max_tokens must be specified
api_key=api_key, # can also set `ANTHROPIC_API_KEY` in env vars
)
llm.invoke("say something")
.. note::

In order to run this code, the `anthropic` Python package needs to be installed:
`pip install anthropic`

See :ref:`anthropicllm`.


Using Cohere LLM
----------------

To use Cohere, instantiate the `CohereLLM` class:

.. code:: python
from neo4j_graphrag.llm import CohereLLM
llm = CohereLLM(
model_name="command-r",
api_key=api_key, # can also set `CO_API_KEY` in env vars
)
llm.invoke("say something")
.. note::

In order to run this code, the `cohere` Python package needs to be installed:
`pip install cohere`


See :ref:`coherellm`.


Using a Local Model via Ollama
-------------------------------
@@ -105,7 +198,7 @@ it can be queried using the following:

.. code:: python
from neo4j_graphrag.llm import OpenAILLM
llm = OpenAILLM(api_key="ollama", base_url="http://127.0.0.1:11434/v1", model_name="orca-mini")
llm.invoke("say something")
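The remaining option, implementing a custom interface, can be sketched as follows. Everything here is a hypothetical stand-in: in real code you would subclass `neo4j_graphrag.llm.LLMInterface` and return the library's own response type; check the `LLMInterface` API documentation for the exact required methods and signatures.

```python
from dataclasses import dataclass

@dataclass
class LLMResponse:  # stand-in for the library's response type (hypothetical shape)
    content: str

class EchoLLM:  # in real code: subclass neo4j_graphrag.llm.LLMInterface instead
    def __init__(self, model_name: str, **model_params):
        self.model_name = model_name
        self.model_params = model_params

    def invoke(self, input: str) -> LLMResponse:
        # Replace this body with a call to your own model or backend service.
        return LLMResponse(content=f"[{self.model_name}] {input.upper()}")

llm = EchoLLM(model_name="my-local-model")
print(llm.invoke("say something").content)
# [my-local-model] SAY SOMETHING
```

Because `GraphRAG` only calls the LLM through this small surface, any backend that can answer a prompt with a string can be plugged in this way.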
@@ -300,13 +393,18 @@ into a vector is required. Therefore, the retriever requires knowledge of an embedder.
Embedders
-----------------------------

Currently, this package supports several embedders:

- `OpenAIEmbeddings`
- `AzureOpenAIEmbeddings`
- `VertexAIEmbeddings`
- `CohereEmbeddings`
- `SentenceTransformerEmbeddings`

The `OpenAIEmbeddings` class was illustrated previously. Here is how to use the `SentenceTransformerEmbeddings`:

.. code:: python
from neo4j_graphrag.embeddings import SentenceTransformerEmbeddings
embedder = SentenceTransformerEmbeddings(model="all-MiniLM-L6-v2") # Note: this is the default model
@@ -330,10 +428,10 @@ the following implementation of an embedder that wraps the `OllamaEmbedding` model:
return embedding[0]
ollama_embedding = OllamaEmbedding(
    model_name="llama3",
    base_url="http://localhost:11434",
    ollama_additional_kwargs={"mirostat": 0},
)
embedder = OllamaEmbedder(ollama_embedding)
vector = embedder.embed_query("some text")