Docs: Update CohereChatGenerator docstrings #958

Merged · 6 commits · Aug 8, 2024
@@ -15,18 +15,16 @@
@component
class CohereChatGenerator:
"""
Enables text generation using Cohere's chat endpoint.
Completes chats using Cohere's models through Cohere `chat` endpoint.

This component is designed for inference with Cohere's chat models.
You can customize how the text is generated by passing parameters to the
Cohere API through the `**generation_kwargs` parameter. You can do this when
initializing or running the component. Any parameter that works with
`cohere.Client.chat` will work here too.
For details, see [Cohere API](https://docs.cohere.com/reference/chat).

Users can pass any text generation parameters valid for the `cohere.Client.chat` method
directly to this component via the `**generation_kwargs` parameter in __init__ or the `**generation_kwargs`
parameter in `run` method.
### Usage example

Invocations are made using the 'cohere' package.
See [Cohere API](https://docs.cohere.com/reference/chat) for more details.

Example usage:
```python
from haystack_integrations.components.generators.cohere import CohereChatGenerator

@@ -49,32 +47,34 @@ def __init__(
"""
Initialize the CohereChatGenerator instance.

:param api_key: the API key for the Cohere API.
:param model: The name of the model to use. Available models are: [command, command-r, command-r-plus, etc.]
:param streaming_callback: a callback function to be called with the streaming response.
:param api_base_url: the base URL of the Cohere API.
:param generation_kwargs: additional model parameters. These will be used during generation. Refer to
https://docs.cohere.com/reference/chat for more details.
:param api_key: The API key for the Cohere API.
:param model: The name of the model to use. You can use models from the `command` family.
:param streaming_callback: A callback function that is called when a new token is received from the stream.
The callback function accepts [StreamingChunk](https://docs.haystack.deepset.ai/docs/data-classes#streamingchunk)
as an argument.
:param api_base_url: The base URL of the Cohere API.
:param generation_kwargs: Other parameters to use for the model during generation. For a list of parameters,
see [Cohere Chat endpoint](https://docs.cohere.com/reference/chat).
Some of the parameters are:
- 'chat_history': A list of previous messages between the user and the model, meant to give the model
conversational context for responding to the user's message.
- 'preamble_override': When specified, the default Cohere preamble will be replaced with the provided one.
- 'conversation_id': An alternative to chat_history. Previous conversations can be resumed by providing
the conversation's identifier. The contents of message and the model's response will be stored
as part of this conversation.If a conversation with this id does not already exist,
a new conversation will be created.
- 'prompt_truncation': Defaults to AUTO when connectors are specified and OFF in all other cases.
Dictates how the prompt will be constructed.
- 'connectors': Accepts {"id": "web-search"}, and/or the "id" for a custom connector, if you've created one.
When specified, the model's reply will be enriched with information found by
- 'preamble': When specified, replaces the default Cohere preamble with the provided one.
- 'conversation_id': An alternative to `chat_history`. Previous conversations can be resumed by providing
the conversation's identifier. The contents of the message and the model's response are stored
as part of this conversation. If a conversation with this ID doesn't exist,
a new conversation is created.
- 'prompt_truncation': Defaults to `AUTO` when connectors are specified and to `OFF` in all other cases.
Dictates how the prompt is constructed.
- 'connectors': Accepts {"id": "web-search"}, and the "id" for a custom connector, if you created one.
When specified, the model's reply is enriched with information found by
querying each of the connectors (RAG).
- 'documents': A list of relevant documents that the model can use to enrich its reply.
- 'search_queries_only': Defaults to false. When true, the response will only contain a
list of generated search queries, but no search will take place, and no reply from the model to the
user's message will be generated.
- 'citation_quality': Defaults to "accurate". Dictates the approach taken to generating citations
- 'search_queries_only': Defaults to `False`. When `True`, the response only contains a
list of generated search queries, but no search takes place, and no reply from the model to the
user's message is generated.
- 'citation_quality': Defaults to `accurate`. Dictates the approach taken to generating citations
as part of the RAG flow by allowing the user to specify whether they want
"accurate" results or "fast" results.
`accurate` results or `fast` results.
- 'temperature': A non-negative float that tunes the degree of randomness in generation. Lower temperatures
mean less random generations.
"""
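The parameters documented above can be sketched in use. A minimal, non-authoritative example follows; the parameter names come from the docstring itself, while the model name `command-r` and the commented-out generator wiring are assumptions (a real run needs the `cohere-haystack` integration installed and a Cohere API key, so those lines are left as comments to keep the snippet self-contained):

```python
# Sketch of assembling the documented parameters. The kwargs are plain Python
# dictionaries matching the names described in the docstring above; the actual
# CohereChatGenerator call is commented out because it requires the
# cohere-haystack integration and an API key (assumptions, not verified here).

def print_streaming_chunk(chunk) -> None:
    """Hypothetical streaming callback: prints each chunk's text as it arrives.

    The docstring says the callback receives a Haystack StreamingChunk, which
    exposes the generated text on its `content` attribute.
    """
    print(chunk.content, end="", flush=True)

generation_kwargs = {
    "temperature": 0.3,            # lower values mean less random generations
    "prompt_truncation": "AUTO",   # let the API decide how to fit the prompt
    "citation_quality": "accurate",  # favor accurate citations over fast ones
}

# from haystack_integrations.components.generators.cohere import CohereChatGenerator
# from haystack.dataclasses import ChatMessage
#
# generator = CohereChatGenerator(
#     model="command-r",  # assumed model name from the `command` family
#     streaming_callback=print_streaming_chunk,
#     generation_kwargs=generation_kwargs,
# )
# result = generator.run(messages=[ChatMessage.from_user("What is RAG?")])
```

Passing `generation_kwargs` at `run` time instead of `__init__` works the same way, per the docstring; run-time values would override the initialization-time ones for that call.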