
Commit

chore: update docstrings (#497)
* update docstrings

* Apply suggestions from code review

Co-authored-by: Stefano Fiorucci <[email protected]>

* Apply suggestions from code review

* Final touch to chat_generator

---------

Co-authored-by: Stefano Fiorucci <[email protected]>
masci and anakin87 authored Feb 28, 2024
1 parent f758de9 commit d31442d
Showing 3 changed files with 71 additions and 56 deletions.
@@ -43,17 +43,27 @@ def __init__(
embedding_separator: str = "\n",
):
"""
- Create a MistralDocumentEmbedder component.
- :param api_key: The Mistral API key.
- :param model: The name of the model to use.
- :param api_base_url: The Mistral API Base url, defaults to None. For more details, see Mistral [docs](https://docs.mistral.ai/api/).
- :param prefix: A string to add to the beginning of each text.
- :param suffix: A string to add to the end of each text.
- :param batch_size: Number of Documents to encode at once.
- :param progress_bar: Whether to show a progress bar or not. Can be helpful to disable in production deployments
- to keep the logs clean.
- :param meta_fields_to_embed: List of meta fields that should be embedded along with the Document text.
- :param embedding_separator: Separator used to concatenate the meta fields to the Document text.
+ Creates a MistralDocumentEmbedder component.
+ :param api_key:
+ The Mistral API key.
+ :param model:
+ The name of the model to use.
+ :param api_base_url:
+ The Mistral API Base url. For more details, see Mistral [docs](https://docs.mistral.ai/api/).
+ :param prefix:
+ A string to add to the beginning of each text.
+ :param suffix:
+ A string to add to the end of each text.
+ :param batch_size:
+ Number of Documents to encode at once.
+ :param progress_bar:
+ Whether to show a progress bar or not. Can be helpful to disable in production deployments to keep
+ the logs clean.
+ :param meta_fields_to_embed:
+ List of meta fields that should be embedded along with the Document text.
+ :param embedding_separator:
+ Separator used to concatenate the meta fields to the Document text.
"""
super(MistralDocumentEmbedder, self).__init__( # noqa: UP008
api_key=api_key,
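For reference, a minimal usage sketch of the component documented above. It is illustrative rather than part of the commit, and it assumes `MISTRAL_API_KEY` is set in the environment and that the document embedder lives next to the text embedder module (`haystack_integrations.components.embedders.mistral.document_embedder`):

```python
# Illustrative sketch only (not from the commit): assumes MISTRAL_API_KEY is exported
# and that the import path mirrors the text embedder's module layout.
from haystack import Document
from haystack_integrations.components.embedders.mistral.document_embedder import MistralDocumentEmbedder

docs = [Document(content="I love pizza!", meta={"cuisine": "italian"})]

# As described in the docstring, meta_fields_to_embed / embedding_separator prepend
# selected metadata to each Document's text before it is sent for embedding.
embedder = MistralDocumentEmbedder(
    batch_size=32,
    meta_fields_to_embed=["cuisine"],
    embedding_separator="\n",
)

result = embedder.run(documents=docs)
print(len(result["documents"][0].embedding))  # dimensionality of the returned vector
print(result["meta"])                         # model name and token usage
```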
@@ -11,22 +11,21 @@
@component
class MistralTextEmbedder(OpenAITextEmbedder):
"""
A component for embedding strings using Mistral models.
Usage example:
```python
from haystack_integrations.components.embedders.mistral.text_embedder import MistralTextEmbedder
text_to_embed = "I love pizza!"
text_embedder = MistralTextEmbedder()
print(text_embedder.run(text_to_embed))
- # {'embedding': [0.017020374536514282, -0.023255806416273117, ...],
- # 'meta': {'model': 'text-embedding-ada-002-v2',
- # 'usage': {'prompt_tokens': 4, 'total_tokens': 4}}}
+ # output:
+ # {'embedding': [0.017020374536514282, -0.023255806416273117, ...],
+ # 'meta': {'model': 'mistral-embed',
+ # 'usage': {'prompt_tokens': 4, 'total_tokens': 4}}}
```
"""

def __init__(
@@ -38,14 +37,19 @@ def __init__(
suffix: str = "",
):
"""
- Create an MistralTextEmbedder component.
- :param api_key: The Misttal API key.
- :param model: The name of the Mistral embedding models to be used.
- :param api_base_url: The Mistral API Base url, defaults to `https://api.mistral.ai/v1`.
- For more details, see Mistral [docs](https://docs.mistral.ai/api/).
- :param prefix: A string to add to the beginning of each text.
- :param suffix: A string to add to the end of each text.
+ Creates a MistralTextEmbedder component.
+ :param api_key:
+ The Mistral API key.
+ :param model:
+ The name of the Mistral embedding model to be used.
+ :param api_base_url:
+ The Mistral API Base url.
+ For more details, see Mistral [docs](https://docs.mistral.ai/api/).
+ :param prefix:
+ A string to add to the beginning of each text.
+ :param suffix:
+ A string to add to the end of each text.
"""
super(MistralTextEmbedder, self).__init__( # noqa: UP008
api_key=api_key,
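A complementary sketch to the docstring example above, showing the `prefix`/`suffix` options; illustrative only, again assuming `MISTRAL_API_KEY` is set in the environment:

```python
# Illustrative sketch only (not from the commit): MISTRAL_API_KEY is assumed to be
# set in the environment, as in the docstring example above.
from haystack_integrations.components.embedders.mistral.text_embedder import MistralTextEmbedder

# prefix and suffix are plain strings wrapped around the input text before embedding.
embedder = MistralTextEmbedder(
    model="mistral-embed",
    prefix="passage: ",
    suffix="",
)

result = embedder.run(text="How do I make pizza dough?")
print(len(result["embedding"]), result["meta"])
```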
@@ -12,17 +12,27 @@
@component
class MistralChatGenerator(OpenAIChatGenerator):
"""
- Enables text generation using Mistral's large language models (LLMs).
- Currently supports `mistral-tiny`, `mistral-small` and `mistral-medium`
- models accessed through the chat completions API endpoint.
+ Enables text generation using Mistral AI generative models.
+ For supported models, see [Mistral AI docs](https://docs.mistral.ai/platform/endpoints/#operation/listModels).
- Users can pass any text generation parameters valid for the `openai.ChatCompletion.create` method
- directly to this component via the `**generation_kwargs` parameter in __init__ or the `**generation_kwargs`
+ Users can pass any text generation parameters valid for the Mistral Chat Completion API
+ directly to this component via the `generation_kwargs` parameter in `__init__` or the `generation_kwargs`
parameter in `run` method.
+ Key Features and Compatibility:
+ - **Primary Compatibility**: Designed to work seamlessly with the Mistral API Chat Completion endpoint.
+ - **Streaming Support**: Supports streaming responses from the Mistral API Chat Completion endpoint.
+ - **Customizability**: Supports all parameters supported by the Mistral API Chat Completion endpoint.
+ This component uses the ChatMessage format for structuring both input and output,
+ ensuring coherent and contextually relevant responses in chat-based text generation scenarios.
+ Details on the ChatMessage format can be found in the
+ [Haystack docs](https://docs.haystack.deepset.ai/v2.0/docs/data-classes#chatmessage)
+ For more details on the parameters supported by the Mistral API, refer to the
+ [Mistral API Docs](https://docs.mistral.ai/api/).
Usage example:
```python
from haystack_integrations.components.generators.mistral import MistralChatGenerator
from haystack.dataclasses import ChatMessage
@@ -38,19 +48,7 @@ class MistralChatGenerator(OpenAIChatGenerator):
>>meaningful and useful.', role=<ChatRole.ASSISTANT: 'assistant'>, name=None,
>>meta={'model': 'mistral-tiny', 'index': 0, 'finish_reason': 'stop',
>>'usage': {'prompt_tokens': 15, 'completion_tokens': 36, 'total_tokens': 51}})]}
```
- Key Features and Compatibility:
- - **Primary Compatibility**: Designed to work seamlessly with the Mistral API Chat Completion endpoint.
- - **Streaming Support**: Supports streaming responses from the Mistral API Chat Completion endpoint.
- - **Customizability**: Supports all parameters supported by the Mistral API Chat Completion endpoint.
- Input and Output Format:
- - **ChatMessage Format**: This component uses the ChatMessage format for structuring both input and output,
- ensuring coherent and contextually relevant responses in chat-based text generation scenarios.
- Details on the ChatMessage format can be found at: https://github.com/openai/openai-python/blob/main/chatml.md.
- Note that the Mistral API does not accept `system` messages yet. You can use `user` and `assistant` messages.
"""

def __init__(
@@ -65,15 +63,19 @@ def __init__(
Creates an instance of MistralChatGenerator. Unless specified otherwise in the `model`, this is for Mistral's
`mistral-tiny` model.
- :param api_key: The Mistral API key.
- :param model: The name of the Mistral chat completion model to use.
- :param streaming_callback: A callback function that is called when a new token is received from the stream.
+ :param api_key:
+ The Mistral API key.
+ :param model:
+ The name of the Mistral chat completion model to use.
+ :param streaming_callback:
+ A callback function that is called when a new token is received from the stream.
The callback function accepts StreamingChunk as an argument.
- :param api_base_url: The Mistral API Base url, defaults to `https://api.mistral.ai/v1`.
- For more details, see Mistral [docs](https://docs.mistral.ai/api/).
- :param generation_kwargs: Other parameters to use for the model. These parameters are all sent directly to
- the Mistrak endpoint. See [Mistral API docs](https://docs.mistral.ai/api/t) for
- more details.
+ :param api_base_url:
+ The Mistral API Base url.
+ For more details, see Mistral [docs](https://docs.mistral.ai/api/).
+ :param generation_kwargs:
+ Other parameters to use for the model. These parameters are all sent directly to
+ the Mistral endpoint. See [Mistral API docs](https://docs.mistral.ai/api/) for more details.
Some of the supported parameters:
- `max_tokens`: The maximum number of tokens the output text can have.
- `temperature`: What sampling temperature to use. Higher values mean the model will take more risks.
@@ -83,7 +85,6 @@ def __init__(
comprising the top 10% probability mass are considered.
- `stream`: Whether to stream back partial progress. If set, tokens will be sent as data-only server-sent
events as they become available, with the stream terminated by a data: [DONE] message.
- - `stop`: One or more sequences after which the LLM should stop generating tokens.
- `safe_prompt`: Whether to inject a safety prompt before all conversations.
- `random_seed`: The seed to use for random sampling.
"""
