Commit
Update URLs (#206)
* update URLs

* make 2.0 URLs future-proof

* fix typos (#207)
anakin87 authored Mar 12, 2024
1 parent 2f88740 commit 0b68f6f
Showing 34 changed files with 95 additions and 151 deletions.
README.md (2 changes: 1 addition & 1 deletion)
@@ -6,7 +6,7 @@ These integrations are maintained by their respective owner or authors. You can

## What are Haystack Integrations?

-Haystack Integrations are a Document Store, Model Provider, Custom Component, Monitoring Tool or Evaluation Framework that are either external packages or additional technologies that can be used with Haystack. Some integrations may be maintained by the deepset team, others are community contributions owned by the authors of the integration. Read more about Haystack Integrations in [Introduction to Integrations](https://docs.haystack.deepset.ai/v2.0/docs/integrations).
+Haystack Integrations are a Document Store, Model Provider, Custom Component, Monitoring Tool or Evaluation Framework that are either external packages or additional technologies that can be used with Haystack. Some integrations may be maintained by the deepset team, others are community contributions owned by the authors of the integration. Read more about Haystack Integrations in [Introduction to Integrations](https://docs.haystack.deepset.ai/docs/integrations).

## Looking for prompts?

integrations/amazon-bedrock.md (6 changes: 3 additions & 3 deletions)
@@ -38,12 +38,12 @@ pip install amazon-bedrock-haystack

## Usage

-Once installed, you will have access to [AmazonBedrockGenerator](https://docs.haystack.deepset.ai/v2.0/docs/amazonbedrockgenerator) and [AmazonBedrockChatGenerator](https://docs.haystack.deepset.ai/v2.0/docs/amazonbedrockchatgenerator) components that support generative language models on Amazon Bedrock.
-You will also have access to the [AmazonBedrockTextEmbedder](https://docs.haystack.deepset.ai/v2.0/docs/amazonbedrocktextembedder) and [AmazonBedrockDocumentEmbedder](https://docs.haystack.deepset.ai/v2.0/docs/amazonbedrockdocumentembedder), which can be used to compute embeddings.
+Once installed, you will have access to [AmazonBedrockGenerator](https://docs.haystack.deepset.ai/docs/amazonbedrockgenerator) and [AmazonBedrockChatGenerator](https://docs.haystack.deepset.ai/docs/amazonbedrockchatgenerator) components that support generative language models on Amazon Bedrock.
+You will also have access to the [AmazonBedrockTextEmbedder](https://docs.haystack.deepset.ai/docs/amazonbedrocktextembedder) and [AmazonBedrockDocumentEmbedder](https://docs.haystack.deepset.ai/docs/amazonbedrockdocumentembedder), which can be used to compute embeddings.

### AmazonBedrockGenerator

-To use this integration for text generation, initialize an `AmazonBedrockGenerator` with the model name, the AWS credentials (`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_DEFAULT_REGION`) should be set as environment variables or passed as [Secret](https://docs.haystack.deepset.ai/v2.0/docs/secret-management) arguments.
+To use this integration for text generation, initialize an `AmazonBedrockGenerator` with the model name, the AWS credentials (`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_DEFAULT_REGION`) should be set as environment variables or passed as [Secret](https://docs.haystack.deepset.ai/docs/secret-management) arguments.
Note, make sure the region you set supports Amazon Bedrock.

Currently, the following models are supported:
integrations/amazon-sagemaker.md (4 changes: 2 additions & 2 deletions)
@@ -31,7 +31,7 @@ toc: true

[Amazon Sagemaker](https://docs.aws.amazon.com/sagemaker/latest/dg/whatis.html) is a comprehensive, fully managed machine learning service
that allows data scientists and developers to build, train, and deploy ML models efficiently. More information can be found on the
-[documentation page](https://docs.haystack.deepset.ai/v2.0/docs/sagemakergenerator).
+[documentation page](https://docs.haystack.deepset.ai/docs/sagemakergenerator).

## Haystack 2.x

@@ -44,7 +44,7 @@ pip install amazon-sagemaker-haystack

### Usage

-Once installed, you will have access to a [SagemakerGenerator](https://docs.haystack.deepset.ai/v2.0/docs/sagemakergenerator) that supports models from various providers. To know more
+Once installed, you will have access to a [SagemakerGenerator](https://docs.haystack.deepset.ai/docs/sagemakergenerator) that supports models from various providers. To know more
about which models are supported, check out [Sagemaker's documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/jumpstart-foundation-models.html).

To use this integration for text generation, initialize a `SagemakerGenerator` with the model name and aws credentials:
integrations/anthropic.md (2 changes: 1 addition & 1 deletion)
@@ -15,7 +15,7 @@ report_issue: https://github.com/deepset-ai/haystack/issues
logo: /logos/anthropic.png
---

-You can use [Anhtropic Claude](https://docs.anthropic.com/claude/reference/getting-started-with-the-api) in your Haystack pipelines with the [PromptNode](https://docs.haystack.deepset.ai/docs/prompt_node#using-anthropic-generative-models), which can also be used with and [Agent](https://docs.haystack.deepset.ai/docs/agent).
+You can use [Anhtropic Claude](https://docs.anthropic.com/claude/reference/getting-started-with-the-api) in your Haystack pipelines with the [PromptNode](https://docs.haystack.deepset.ai/v1.25/docs/prompt_node#using-anthropic-generative-models), which can also be used with and [Agent](https://docs.haystack.deepset.ai/v1.25/docs/agent).

## Installation

integrations/astradb.md (4 changes: 2 additions & 2 deletions)
@@ -37,8 +37,8 @@ This integration allows you to use AstraDB for document storage and retrieval in

## Components

-- [`AstraDocumentStore`](https://docs.haystack.deepset.ai/v2.0/docs/astradocumentstore). This component serves as a persistent data store for your Haystack documents, and supports a number of embedding models and vector dimensions.
-- [`AstraEmbeddingRetriever`](https://docs.haystack.deepset.ai/v2.0/docs/astraretriever) This is an embedding-based Retriever compatible with the Astra Document Store.
+- [`AstraDocumentStore`](https://docs.haystack.deepset.ai/docs/astradocumentstore). This component serves as a persistent data store for your Haystack documents, and supports a number of embedding models and vector dimensions.
+- [`AstraEmbeddingRetriever`](https://docs.haystack.deepset.ai/docs/astraretriever) This is an embedding-based Retriever compatible with the Astra Document Store.


## Initialization
integrations/azure.md (14 changes: 7 additions & 7 deletions)
@@ -44,16 +44,16 @@ To work with Azure components, you will need an Azure OpenAI API key, an [Azure

### Components

-- [AzureOpenAIGenerator](https://docs.haystack.deepset.ai/v2.0/docs/azureopenaigenerator)
-- [AzureOpenAIChatGenerator](https://docs.haystack.deepset.ai/v2.0/docs/azureopenaichatgenerator)
-- [AzureOpenAITextEmbedder](https://docs.haystack.deepset.ai/v2.0/docs/azureopenaitextembedder)
-- [AzureOpenAIDocumentEmbedder](https://docs.haystack.deepset.ai/v2.0/docs/azureopenaidocumentembedder)
+- [AzureOpenAIGenerator](https://docs.haystack.deepset.ai/docs/azureopenaigenerator)
+- [AzureOpenAIChatGenerator](https://docs.haystack.deepset.ai/docs/azureopenaichatgenerator)
+- [AzureOpenAITextEmbedder](https://docs.haystack.deepset.ai/docs/azureopenaitextembedder)
+- [AzureOpenAIDocumentEmbedder](https://docs.haystack.deepset.ai/docs/azureopenaidocumentembedder)

-All components use `AZURE_OPENAI_API_KEY` and `AZURE_OPENAI_AD_TOKEN` environment variables by default. Otherwise, you can pass `api_key` and `azure_ad_token` at initialization using `Secret` class. Read more about [Secret Handling](https://docs.haystack.deepset.ai/v2.0/docs/secret-management#structured-secret-handling).
+All components use `AZURE_OPENAI_API_KEY` and `AZURE_OPENAI_AD_TOKEN` environment variables by default. Otherwise, you can pass `api_key` and `azure_ad_token` at initialization using `Secret` class. Read more about [Secret Handling](https://docs.haystack.deepset.ai/docs/secret-management#structured-secret-handling).

### Embedding Models

-You can leverage embedding models from Azure OpenAI through two components: [AzureOpenAITextEmbedder](https://docs.haystack.deepset.ai/v2.0/docs/azureopenaitextembedder) and [AzureOpenAIDocumentEmbedder](https://docs.haystack.deepset.ai/v2.0/docs/azureopenaidocumentembedder).
+You can leverage embedding models from Azure OpenAI through two components: [AzureOpenAITextEmbedder](https://docs.haystack.deepset.ai/docs/azureopenaitextembedder) and [AzureOpenAIDocumentEmbedder](https://docs.haystack.deepset.ai/docs/azureopenaidocumentembedder).

To create semantic embeddings for documents, use `AzureOpenAIDocumentEmbedder` in your indexing pipeline. For generating embeddings for queries, use `AzureOpenAITextEmbedder`. Once you've selected the suitable component for your specific use case, initialize the component with required parameters.

@@ -84,7 +84,7 @@ indexing_pipeline.run({"embedder": {"documents": documents}})

### Generative Models (LLMs)

-You can leverage Azure OpenAI models through two components: [AzureOpenAIGenerator](https://docs.haystack.deepset.ai/v2.0/docs/azureopenaigenerator) and [AzureOpenAIChatGenerator](https://docs.haystack.deepset.ai/v2.0/docs/azureopenaichatgenerator).
+You can leverage Azure OpenAI models through two components: [AzureOpenAIGenerator](https://docs.haystack.deepset.ai/docs/azureopenaigenerator) and [AzureOpenAIChatGenerator](https://docs.haystack.deepset.ai/docs/azureopenaichatgenerator).

To use OpenAI models deployed through Azure services for text generation, initialize a `AzureOpenAIGenerator` with `azure_deployment` and `azure_endpoint`. You can then use the `AzureOpenAIGenerator` instance in a pipeline after the `PromptBuilder`.

integrations/cohere.md (10 changes: 5 additions & 5 deletions)
@@ -33,7 +33,7 @@ toc: true

## Haystack 2.0

-You can use [Cohere Models](https://cohere.com/) in your Haystack 2.0 pipelines with the [Generators](https://docs.haystack.deepset.ai/v2.0/docs/generators) and [Embedders](https://docs.haystack.deepset.ai/v2.0/docs/embedders).
+You can use [Cohere Models](https://cohere.com/) in your Haystack 2.0 pipelines with the [Generators](https://docs.haystack.deepset.ai/docs/generators) and [Embedders](https://docs.haystack.deepset.ai/docs/embedders).

### Installation

@@ -47,7 +47,7 @@ You can use Cohere models in various ways:

#### Embedding Models

-You can leverage `/embed` models from Cohere through two components: [CohereTextEmbedder](https://docs.haystack.deepset.ai/v2.0/docs/coheretextembedder) and [CohereDocumentEmbedder](https://docs.haystack.deepset.ai/v2.0/docs/coheredocumentembedder). These components support both **Embed v2** and **Embed v3** models.
+You can leverage `/embed` models from Cohere through two components: [CohereTextEmbedder](https://docs.haystack.deepset.ai/docs/coheretextembedder) and [CohereDocumentEmbedder](https://docs.haystack.deepset.ai/docs/coheredocumentembedder). These components support both **Embed v2** and **Embed v3** models.

To create semantic embeddings for documents, use `CohereDocumentEmbedder` in your indexing pipeline. For generating embeddings for queries, use `CohereTextEmbedder`. Once you've selected the suitable component for your specific use case, initialize the component with the model name. By default, the Cohere API key with be automatically read from either the `COHERE_API_KEY` environment variable or the `CO_API_KEY` environment variable.

@@ -76,7 +76,7 @@ indexing_pipeline.run({"embedder": {"documents": documents}})

#### Generative Models (LLMs)

-To use `/generate` models from Cohere, initialize a [CohereGenerator](https://docs.haystack.deepset.ai/v2.0/docs/coheregenerator) with the model name. By default, the Cohere API key with be automatically read from either the `COHERE_API_KEY` environment variable or the `CO_API_KEY` environment variable. You can then use this `CohereGenerator` in a question answering pipeline after the `PromptBuilder`.
+To use `/generate` models from Cohere, initialize a [CohereGenerator](https://docs.haystack.deepset.ai/docs/coheregenerator) with the model name. By default, the Cohere API key with be automatically read from either the `COHERE_API_KEY` environment variable or the `CO_API_KEY` environment variable. You can then use this `CohereGenerator` in a question answering pipeline after the `PromptBuilder`.

Below is the example of generative questions answering pipeline using RAG with `PromptBuilder` and `CohereGenerator`:

@@ -112,7 +112,7 @@ pipe.run({
})
```

-Similar to the above example, you can also use [`CohereChatGenerator`](https://docs.haystack.deepset.ai/v2.0/docs/coherechatgenerator) to use Cohere `/chat` models and features (streaming, connectors) in your pipeline.
+Similar to the above example, you can also use [`CohereChatGenerator`](https://docs.haystack.deepset.ai/docs/coherechatgenerator) to use Cohere `/chat` models and features (streaming, connectors) in your pipeline.

```python
from haystack import Pipeline
@@ -136,7 +136,7 @@ print(res)

## Haystack 1.x

-You can use [Cohere Models](https://cohere.com/) in your Haystack pipelines with the [EmbeddingRetriever](https://docs.haystack.deepset.ai/docs/retriever#embedding-retrieval-recommended), [PromptNode](https://docs.haystack.deepset.ai/docs/prompt_node), and [CohereRanker](https://docs.haystack.deepset.ai/docs/ranker#cohereranker).
+You can use [Cohere Models](https://cohere.com/) in your Haystack pipelines with the [EmbeddingRetriever](https://docs.haystack.deepset.ai/v1.25/docs/retriever#embedding-retrieval-recommended), [PromptNode](https://docs.haystack.deepset.ai/v1.25/docs/prompt_node), and [CohereRanker](https://docs.haystack.deepset.ai/v1.25/docs/ranker#cohereranker).

### Installation (1.x)

integrations/context-ai.md (2 changes: 1 addition & 1 deletion)
@@ -85,7 +85,7 @@ pipe.connect("prompt_builder.prompt", "llm.messages")
pipe.connect("prompt_builder.prompt", "prompt_analytics")
pipe.connect("llm.replies", "assistant_analytics")

-# thread_id is unqiue to each conversation
+# thread_id is unique to each conversation
context_parameters = {"thread_id": uuid.uuid4(), "metadata": {"model": model, "user_id": "1234"}}
location = "Berlin"
messages = [ChatMessage.from_system("Always respond in German even if some input data is in other languages."),
integrations/deepeval.md (4 changes: 2 additions & 2 deletions)
@@ -26,7 +26,7 @@ toc: true

## Overview

-[DeepEval](https://github.com/confident-ai/deepeval) (by [Confident AI](https://www.confident-ai.com/)) is an open source framework for model-based evaluation to evaluate your LLM applications by quantifying their performance on aspects such as faithfulness, answer relevancy, contextual recall etc. More information can be found on the [documentation page](https://docs.haystack.deepset.ai/v2.0/docs/deepevalevaluator).
+[DeepEval](https://github.com/confident-ai/deepeval) (by [Confident AI](https://www.confident-ai.com/)) is an open source framework for model-based evaluation to evaluate your LLM applications by quantifying their performance on aspects such as faithfulness, answer relevancy, contextual recall etc. More information can be found on the [documentation page](https://docs.haystack.deepset.ai/docs/deepevalevaluator).

## Installation

@@ -37,7 +37,7 @@ pip install deepeval-haystack

## Usage

-Once installed, you will have access to a [DeepEvalEvaluator](https://docs.haystack.deepset.ai/v2.0/docs/deepevalevaluator) that supports a variety of model-based evaluation metrics:
+Once installed, you will have access to a [DeepEvalEvaluator](https://docs.haystack.deepset.ai/docs/deepevalevaluator) that supports a variety of model-based evaluation metrics:
- Answer Relevancy
- Faithfulness
- Contextual Precision
