Fix CI and to_dict tests #12

Open · wants to merge 12 commits into base: main
2 changes: 1 addition & 1 deletion .github/workflows/e2e_preview.yml
@@ -36,7 +36,7 @@ jobs:
     sudo apt install ffmpeg # for local Whisper tests

 - name: Install Haystack
-  run: pip install .[dev,preview,audio] langdetect transformers[torch,sentencepiece]==4.35.2 'sentence-transformers>=2.2.0' pypdf tika 'azure-ai-formrecognizer>=3.2.0b2'
+  run: pip install .[dev,preview,audio] langdetect transformers[torch,sentencepiece]==4.35.2 'sentence-transformers>=2.2.0' pypdf tika 'azure-ai-formrecognizer>=3.2.0b2' gradientai

 - name: Run tests
   run: pytest e2e/preview
4 changes: 2 additions & 2 deletions .github/workflows/linting_preview.yml
@@ -38,7 +38,7 @@ jobs:
   python-version: ${{ env.PYTHON_VERSION }}

 - name: Install Haystack
-  run: pip install .[dev,preview,audio] langdetect transformers[torch,sentencepiece]==4.35.2 'sentence-transformers>=2.2.0' pypdf tika 'azure-ai-formrecognizer>=3.2.0b2'
+  run: pip install .[dev,preview,audio] langdetect transformers[torch,sentencepiece]==4.35.2 'sentence-transformers>=2.2.0' pypdf tika 'azure-ai-formrecognizer>=3.2.0b2' gradientai

 - name: Mypy
   if: steps.files.outputs.any_changed == 'true'
@@ -69,7 +69,7 @@ jobs:

 - name: Install Haystack
   run: |
-    pip install .[dev,preview,audio] langdetect transformers[torch,sentencepiece]==4.35.2 'sentence-transformers>=2.2.0' pypdf markdown-it-py mdit_plain tika 'azure-ai-formrecognizer>=3.2.0b2'
+    pip install .[dev,preview,audio] langdetect transformers[torch,sentencepiece]==4.35.2 'sentence-transformers>=2.2.0' pypdf markdown-it-py mdit_plain tika 'azure-ai-formrecognizer>=3.2.0b2' gradientai
     pip install ./haystack-linter

 - name: Pylint
8 changes: 4 additions & 4 deletions .github/workflows/tests_preview.yml
@@ -116,7 +116,7 @@ jobs:
   python-version: ${{ env.PYTHON_VERSION }}

 - name: Install Haystack
-  run: pip install .[dev,preview,audio] langdetect transformers[torch,sentencepiece]==4.35.2 'sentence-transformers>=2.2.0' pypdf markdown-it-py mdit_plain tika 'azure-ai-formrecognizer>=3.2.0b2'
+  run: pip install .[dev,preview,audio] langdetect transformers[torch,sentencepiece]==4.35.2 'sentence-transformers>=2.2.0' pypdf markdown-it-py mdit_plain tika 'azure-ai-formrecognizer>=3.2.0b2' gradientai

 - name: Run
   run: pytest -m "not integration" test/preview
@@ -174,7 +174,7 @@ jobs:
     sudo apt install ffmpeg # for local Whisper tests

 - name: Install Haystack
-  run: pip install .[dev,preview,audio] langdetect transformers[torch,sentencepiece]==4.35.2 'sentence-transformers>=2.2.0' pypdf markdown-it-py mdit_plain tika 'azure-ai-formrecognizer>=3.2.0b2'
+  run: pip install .[dev,preview,audio] langdetect transformers[torch,sentencepiece]==4.35.2 'sentence-transformers>=2.2.0' pypdf markdown-it-py mdit_plain tika 'azure-ai-formrecognizer>=3.2.0b2' gradientai

 - name: Run
   run: pytest --maxfail=5 -m "integration" test/preview
@@ -230,7 +230,7 @@ jobs:
     colima start

 - name: Install Haystack
-  run: pip install .[dev,preview,audio] langdetect transformers[torch,sentencepiece]==4.35.2 'sentence-transformers>=2.2.0' pypdf markdown-it-py mdit_plain tika 'azure-ai-formrecognizer>=3.2.0b2'
+  run: pip install .[dev,preview,audio] langdetect transformers[torch,sentencepiece]==4.35.2 'sentence-transformers>=2.2.0' pypdf markdown-it-py mdit_plain tika 'azure-ai-formrecognizer>=3.2.0b2' gradientai

 - name: Run Tika
   run: docker run -d -p 9998:9998 apache/tika:2.9.0.0
@@ -281,7 +281,7 @@ jobs:
   python-version: ${{ env.PYTHON_VERSION }}

 - name: Install Haystack
-  run: pip install .[dev,preview,audio] langdetect transformers[torch,sentencepiece]==4.35.2 'sentence-transformers>=2.2.0' pypdf markdown-it-py mdit_plain tika 'azure-ai-formrecognizer>=3.2.0b2'
+  run: pip install .[dev,preview,audio] langdetect transformers[torch,sentencepiece]==4.35.2 'sentence-transformers>=2.2.0' pypdf markdown-it-py mdit_plain tika 'azure-ai-formrecognizer>=3.2.0b2' gradientai

 - name: Run
   run: pytest --maxfail=5 -m "integration" test/preview -k 'not tika'
92 changes: 92 additions & 0 deletions e2e/preview/pipelines/test_gradient_rag_pipelines.py
@@ -0,0 +1,92 @@
import os
import json
import pytest

from haystack.preview import Pipeline, Document
from haystack.preview.components.embedders.gradient_document_embedder import GradientDocumentEmbedder
from haystack.preview.components.embedders.gradient_text_embedder import GradientTextEmbedder
from haystack.preview.document_stores import InMemoryDocumentStore
from haystack.preview.components.writers import DocumentWriter
from haystack.preview.components.retrievers import InMemoryEmbeddingRetriever
from haystack.preview.components.generators.gradient.base import GradientGenerator
from haystack.preview.components.builders.answer_builder import AnswerBuilder
from haystack.preview.components.builders.prompt_builder import PromptBuilder


@pytest.mark.skipif(
    not os.environ.get("GRADIENT_ACCESS_TOKEN", None) or not os.environ.get("GRADIENT_WORKSPACE_ID", None),
    reason="Set the GRADIENT_ACCESS_TOKEN and GRADIENT_WORKSPACE_ID environment variables to run this test.",
)
def test_gradient_embedding_retrieval_rag_pipeline(tmp_path):
    # Create the RAG pipeline
    prompt_template = """
    Given these documents, answer the question.\nDocuments:
    {% for doc in documents %}
    {{ doc.content }}
    {% endfor %}

    \nQuestion: {{question}}
    \nAnswer:
    """

    gradient_access_token = os.environ.get("GRADIENT_ACCESS_TOKEN")
    rag_pipeline = Pipeline()
    embedder = GradientTextEmbedder(access_token=gradient_access_token)
    rag_pipeline.add_component(instance=embedder, name="text_embedder")
    rag_pipeline.add_component(
        instance=InMemoryEmbeddingRetriever(document_store=InMemoryDocumentStore()), name="retriever"
    )
    rag_pipeline.add_component(instance=PromptBuilder(template=prompt_template), name="prompt_builder")
    rag_pipeline.add_component(
        instance=GradientGenerator(access_token=gradient_access_token, base_model_slug="llama2-7b-chat"), name="llm"
    )
    rag_pipeline.add_component(instance=AnswerBuilder(), name="answer_builder")
    rag_pipeline.connect("text_embedder", "retriever")
    rag_pipeline.connect("retriever", "prompt_builder.documents")
    rag_pipeline.connect("prompt_builder", "llm")
    rag_pipeline.connect("llm.replies", "answer_builder.replies")
    rag_pipeline.connect("retriever", "answer_builder.documents")

    # Draw the pipeline
    rag_pipeline.draw(tmp_path / "test_gradient_embedding_rag_pipeline.png")

    # Serialize the pipeline to JSON
    with open(tmp_path / "test_gradient_rag_pipeline.json", "w") as f:
        json.dump(rag_pipeline.to_dict(), f)

    # Load the pipeline back
    with open(tmp_path / "test_gradient_rag_pipeline.json", "r") as f:
        rag_pipeline = Pipeline.from_dict(json.load(f))

    # Populate the document store
    documents = [
        Document(content="My name is Jean and I live in Paris."),
        Document(content="My name is Mark and I live in Berlin."),
        Document(content="My name is Giorgio and I live in Rome."),
    ]
    document_store = rag_pipeline.get_component("retriever").document_store
    indexing_pipeline = Pipeline()
    indexing_pipeline.add_component(instance=GradientDocumentEmbedder(), name="document_embedder")
    indexing_pipeline.add_component(instance=DocumentWriter(document_store=document_store), name="document_writer")
    indexing_pipeline.connect("document_embedder", "document_writer")
    indexing_pipeline.run({"document_embedder": {"documents": documents}})

    # Query and assert
    questions = ["Who lives in Paris?", "Who lives in Berlin?", "Who lives in Rome?"]
    answers_spywords = ["Jean", "Mark", "Giorgio"]

    for question, spyword in zip(questions, answers_spywords):
        result = rag_pipeline.run(
            {
                "text_embedder": {"text": question},
                "prompt_builder": {"question": question},
                "answer_builder": {"query": question},
            }
        )

        assert len(result["answer_builder"]["answers"]) == 1
        generated_answer = result["answer_builder"]["answers"][0]
        assert spyword in generated_answer.data
        assert generated_answer.query == question
        assert hasattr(generated_answer, "documents")
        assert hasattr(generated_answer, "metadata")
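The JSON round-trip above is the `to_dict` behavior this PR's title refers to. As a rough sketch of what a single serialized component looks like — the exact dictionary layout follows Haystack's `default_to_dict` helper and is an assumption here, not something shown in this diff:

```python
from haystack.preview.components.embedders.gradient_text_embedder import GradientTextEmbedder

# Requires GRADIENT_ACCESS_TOKEN / GRADIENT_WORKSPACE_ID in the environment.
embedder = GradientTextEmbedder(model_name="bge-large")
data = embedder.to_dict()
# Assumed shape, per default_to_dict conventions:
# {"type": "GradientTextEmbedder",
#  "init_parameters": {"workspace_id": "<your-workspace>", "model_name": "bge-large"}}
```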
5 changes: 5 additions & 0 deletions haystack/preview/components/embedders/__init__.py
@@ -1,3 +1,6 @@
+from haystack.preview.components.embedders.gradient_text_embedder import GradientTextEmbedder
+from haystack.preview.components.embedders.gradient_document_embedder import GradientDocumentEmbedder
+
 from haystack.preview.components.embedders.sentence_transformers_text_embedder import SentenceTransformersTextEmbedder
 from haystack.preview.components.embedders.sentence_transformers_document_embedder import (
     SentenceTransformersDocumentEmbedder,
@@ -6,6 +9,8 @@
 from haystack.preview.components.embedders.openai_text_embedder import OpenAITextEmbedder

 __all__ = [
+    "GradientTextEmbedder",
+    "GradientDocumentEmbedder",
     "SentenceTransformersTextEmbedder",
     "SentenceTransformersDocumentEmbedder",
     "OpenAITextEmbedder",
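With these exports in place, both embedders can be imported from the embedders package directly; a quick sketch:

```python
from haystack.preview.components.embedders import GradientDocumentEmbedder, GradientTextEmbedder
```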
112 changes: 112 additions & 0 deletions haystack/preview/components/embedders/gradient_document_embedder.py
@@ -0,0 +1,112 @@
import logging
from typing import List, Optional, Dict, Any

from haystack.preview import component, Document, default_to_dict
from haystack.preview.lazy_imports import LazyImport

with LazyImport(message="Run 'pip install gradientai'") as gradientai_import:
    from gradientai import Gradient

logger = logging.getLogger(__name__)


@component
class GradientDocumentEmbedder:
    """
    A component for computing Document embeddings using the Gradient AI API.
    The embedding of each Document is stored in the `embedding` field of the Document.

    ```python
    embedder = GradientDocumentEmbedder(
        access_token=gradient_access_token,
        workspace_id=gradient_workspace_id,
        model_name="bge-large",
    )
    p = Pipeline()
    p.add_component(embedder, name="document_embedder")
    p.add_component(instance=DocumentWriter(document_store=InMemoryDocumentStore()), name="document_writer")
    p.connect("document_embedder", "document_writer")
    p.run({"document_embedder": {"documents": documents}})
    ```
    """

    def __init__(
        self,
        *,
        model_name: str = "bge-large",
        batch_size: int = 100,
        access_token: Optional[str] = None,
        workspace_id: Optional[str] = None,
        host: Optional[str] = None,
    ) -> None:
        """
        Create a GradientDocumentEmbedder component.

        :param model_name: The name of the model to use.
        :param batch_size: The number of Documents to send to the API in each request.
        :param access_token: The Gradient access token. If not provided, it's read from the environment
            variable GRADIENT_ACCESS_TOKEN.
        :param workspace_id: The Gradient workspace ID. If not provided, it's read from the environment
            variable GRADIENT_WORKSPACE_ID.
        :param host: The Gradient host. By default, it uses https://api.gradient.ai/.
        """
        gradientai_import.check()
        self._batch_size = batch_size
        self._host = host
        self._model_name = model_name

        self._gradient = Gradient(access_token=access_token, host=host, workspace_id=workspace_id)

    def _get_telemetry_data(self) -> Dict[str, Any]:
        """
        Data that is sent to Posthog for usage analytics.
        """
        return {"model": self._model_name}

    def to_dict(self) -> dict:
        """
        Serialize the component to a Python dictionary.
        """
        return default_to_dict(self, workspace_id=self._gradient.workspace_id, model_name=self._model_name)

    def warm_up(self) -> None:
        """
        Load the embedding model.
        """
        if not hasattr(self, "_embedding_model"):
            self._embedding_model = self._gradient.get_embeddings_model(slug=self._model_name)

    def _generate_embeddings(self, documents: List[Document], batch_size: int) -> List[List[float]]:
        """
        Batches the documents and generates the embeddings.
        """
        batches = [documents[i : i + batch_size] for i in range(0, len(documents), batch_size)]

        embeddings = []
        for batch in batches:
            response = self._embedding_model.generate_embeddings(inputs=[{"input": doc.content} for doc in batch])
            embeddings.extend([e.embedding for e in response.embeddings])

        return embeddings

    @component.output_types(documents=List[Document])
    def run(self, documents: List[Document]):
        """
        Embed a list of Documents.
        The embedding of each Document is stored in the `embedding` field of the Document.

        :param documents: A list of Documents to embed.
        """
        if not isinstance(documents, list) or documents and any(not isinstance(doc, Document) for doc in documents):
            raise TypeError(
                "GradientDocumentEmbedder expects a list of Documents as input. "
                "In case you want to embed a list of strings, please use the GradientTextEmbedder."
            )

        if not hasattr(self, "_embedding_model"):
            raise RuntimeError("The embedding model has not been loaded. Please call warm_up() before running.")

        embeddings = self._generate_embeddings(documents=documents, batch_size=self._batch_size)
        for doc, embedding in zip(documents, embeddings):
            doc.embedding = embedding

        return {"documents": documents}
88 changes: 88 additions & 0 deletions haystack/preview/components/embedders/gradient_text_embedder.py
@@ -0,0 +1,88 @@
from typing import Any, Dict, List, Optional

from haystack.preview import component, default_to_dict
from haystack.preview.lazy_imports import LazyImport

with LazyImport(message="Run 'pip install gradientai'") as gradientai_import:
    from gradientai import Gradient


@component
class GradientTextEmbedder:
    """
    A component for embedding strings using models hosted on Gradient AI (https://gradient.ai).

    ```python
    embedder = GradientTextEmbedder(
        access_token=gradient_access_token,
        workspace_id=gradient_workspace_id,
        model_name="bge-large",
    )
    p = Pipeline()
    p.add_component(instance=embedder, name="text_embedder")
    p.add_component(instance=InMemoryEmbeddingRetriever(document_store=InMemoryDocumentStore()), name="retriever")
    p.connect("text_embedder", "retriever")
    p.run({"text_embedder": {"text": "embed me!!!"}})
    ```
    """

    def __init__(
        self,
        *,
        model_name: str = "bge-large",
        access_token: Optional[str] = None,
        workspace_id: Optional[str] = None,
        host: Optional[str] = None,
    ) -> None:
        """
        Create a GradientTextEmbedder component.

        :param model_name: The name of the model to use.
        :param access_token: The Gradient access token. If not provided, it's read from the environment
            variable GRADIENT_ACCESS_TOKEN.
        :param workspace_id: The Gradient workspace ID. If not provided, it's read from the environment
            variable GRADIENT_WORKSPACE_ID.
        :param host: The Gradient host. By default, it uses https://api.gradient.ai/.
        """
        gradientai_import.check()
        self._host = host
        self._model_name = model_name

        self._gradient = Gradient(access_token=access_token, host=host, workspace_id=workspace_id)

    def _get_telemetry_data(self) -> Dict[str, Any]:
        """
        Data that is sent to Posthog for usage analytics.
        """
        return {"model": self._model_name}

    def to_dict(self) -> dict:
        """
        Serialize the component to a Python dictionary.
        """
        return default_to_dict(self, workspace_id=self._gradient.workspace_id, model_name=self._model_name)

    def warm_up(self) -> None:
        """
        Load the embedding model.
        """
        if not hasattr(self, "_embedding_model"):
            self._embedding_model = self._gradient.get_embeddings_model(slug=self._model_name)

    @component.output_types(embedding=List[float])
    def run(self, text: str):
        """Generates an embedding for a single string."""
        if not isinstance(text, str):
            raise TypeError(
                "GradientTextEmbedder expects a string as an input. "
                "In case you want to embed a list of Documents, please use the GradientDocumentEmbedder."
            )

        if not hasattr(self, "_embedding_model"):
            raise RuntimeError("The embedding model has not been loaded. Please call warm_up() before running.")

        result = self._embedding_model.generate_embeddings(inputs=[{"input": text}])

        if (not result) or (result.embeddings is None) or (len(result.embeddings) == 0):
            raise RuntimeError("The embedding model did not return any embeddings.")

        return {"embedding": result.embeddings[0].embedding}
3 changes: 2 additions & 1 deletion haystack/preview/components/generators/__init__.py
@@ -1,5 +1,6 @@
+from haystack.preview.components.generators.gradient.base import GradientGenerator
 from haystack.preview.components.generators.hugging_face_local import HuggingFaceLocalGenerator
 from haystack.preview.components.generators.hugging_face_tgi import HuggingFaceTGIGenerator
 from haystack.preview.components.generators.openai import GPTGenerator

-__all__ = ["HuggingFaceLocalGenerator", "HuggingFaceTGIGenerator", "GPTGenerator"]
+__all__ = ["GPTGenerator", "GradientGenerator", "HuggingFaceTGIGenerator", "HuggingFaceLocalGenerator"]