
feat: GPT35Generator #5714

Merged 44 commits on Sep 7, 2023
Changes from 3 commits
44 commits
a243cae
chatgpt backend
ZanSara Sep 4, 2023
f59abe8
fix tests
ZanSara Sep 4, 2023
5f70a65
reno
ZanSara Sep 4, 2023
ffb1a8f
remove print
ZanSara Sep 4, 2023
853f29d
helpers tests
ZanSara Sep 4, 2023
f0c5a8d
add chatgpt generator
ZanSara Sep 4, 2023
0a25414
use openai sdk
ZanSara Sep 4, 2023
5105ae8
remove backend
ZanSara Sep 4, 2023
7d0c8e6
tests are broken
ZanSara Sep 4, 2023
de46d10
fix tests
ZanSara Sep 5, 2023
28d83f4
stray param
ZanSara Sep 5, 2023
30b4bc3
move _check_troncated_answers into the class
ZanSara Sep 5, 2023
1b744e4
wrong import
ZanSara Sep 5, 2023
ab0e45c
rename function
ZanSara Sep 5, 2023
fc7dc05
typo in test
ZanSara Sep 5, 2023
3e43dcd
add openai deps
ZanSara Sep 5, 2023
c3381e3
mypy
ZanSara Sep 5, 2023
a204d14
Merge branch 'main' into chatgpt-llm-generator
ZanSara Sep 5, 2023
8d6f134
improve system prompt docstring
ZanSara Sep 5, 2023
8e0c1c6
Merge branch 'chatgpt-llm-generator' of github.com:deepset-ai/haystac…
ZanSara Sep 5, 2023
e1652f8
typos update
dfokina Sep 5, 2023
2a256b2
Update haystack/preview/components/generators/openai/chatgpt.py
ZanSara Sep 5, 2023
7178f23
pylint
ZanSara Sep 5, 2023
9eb7900
Merge branch 'chatgpt-llm-generator' of github.com:deepset-ai/haystac…
ZanSara Sep 5, 2023
13104de
Merge branch 'main' into chatgpt-llm-generator
ZanSara Sep 5, 2023
155485f
Update haystack/preview/components/generators/openai/chatgpt.py
ZanSara Sep 5, 2023
b2187c3
Update haystack/preview/components/generators/openai/chatgpt.py
ZanSara Sep 5, 2023
ed08e34
Update haystack/preview/components/generators/openai/chatgpt.py
ZanSara Sep 5, 2023
cc0bb7d
review feedback
ZanSara Sep 5, 2023
c58ab26
fix tests
ZanSara Sep 5, 2023
835fd0c
freview feedback
ZanSara Sep 5, 2023
0eb43f9
reno
ZanSara Sep 5, 2023
e8d92dd
remove tenacity mock
ZanSara Sep 6, 2023
0aeb875
gpt35generator
ZanSara Sep 6, 2023
9167e05
fix naming
ZanSara Sep 6, 2023
941cc66
remove stray references to chatgpt
ZanSara Sep 6, 2023
04ec229
fix e2e
ZanSara Sep 6, 2023
4eece1e
Merge branch 'main' into chatgpt-llm-generator
ZanSara Sep 6, 2023
8fb06ae
Update releasenotes/notes/chatgpt-llm-generator-d043532654efe684.yaml
ZanSara Sep 6, 2023
46385ac
add another test
ZanSara Sep 6, 2023
812e8b9
Merge branch 'main' into chatgpt-llm-generator
ZanSara Sep 6, 2023
3ca3f73
test wrong model name
ZanSara Sep 6, 2023
1015424
review feedback
ZanSara Sep 6, 2023
b79c7c1
Merge branch 'main' into chatgpt-llm-generator
ZanSara Sep 6, 2023
10 changes: 5 additions & 5 deletions haystack/preview/components/generators/openai/chatgpt.py
@@ -56,7 +56,7 @@ def __init__(
:param model_name: The name of the model to use.
:param system_prompt: An additional message to be sent to the LLM at the beginning of each conversation.
Typically, a conversation is formatted with a system message first, followed by alternating messages from
the 'user' (the "quesries") and the 'assistant' (the "responses"). The system message helps set the behavior
the 'user' (the "queries") and the 'assistant' (the "responses"). The system message helps set the behavior
of the assistant. For example, you can modify the personality of the assistant or provide specific
instructions about how it should behave throughout the conversation.
:param streaming_callback: A callback function that is called when a new token is received from the stream.
@@ -68,7 +68,7 @@
[documentation](https://platform.openai.com/docs/api-reference/chat) for more details. Some of the supported
parameters:
- `max_tokens`: The maximum number of tokens the output text can have.
- `temperature`: What sampling temperature to use. Higher values means the model will take more risks.
- `temperature`: What sampling temperature to use. Higher values mean the model will take more risks.
Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer.
- `top_p`: An alternative to sampling with temperature, called nucleus sampling, where the model
considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens
@@ -158,7 +158,7 @@ def run(
:param model_name: The name of the model to use.
:param system_prompt: An additional message to be sent to the LLM at the beginning of each conversation.
Typically, a conversation is formatted with a system message first, followed by alternating messages from
the 'user' (the "quesries") and the 'assistant' (the "responses"). The system message helps set the behavior
the 'user' (the "queries") and the 'assistant' (the "responses"). The system message helps set the behavior
of the assistant. For example, you can modify the personality of the assistant or provide specific
instructions about how it should behave throughout the conversation.
:param streaming_callback: A callback function that is called when a new token is received from the stream.
@@ -170,7 +170,7 @@ def run(
[documentation](https://platform.openai.com/docs/api-reference/chat) for more details. Some of the supported
parameters:
- `max_tokens`: The maximum number of tokens the output text can have.
- `temperature`: What sampling temperature to use. Higher values means the model will take more risks.
- `temperature`: What sampling temperature to use. Higher values mean the model will take more risks.
Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer.
- `top_p`: An alternative to sampling with temperature, called nucleus sampling, where the model
considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens
@@ -260,7 +260,7 @@ def run(

def _check_truncated_answers(self, metadata: List[Dict[str, Any]]):
"""
Check the `finish_reason` the answers returned by OpenAI completions endpoint.
Check the `finish_reason` returned with the OpenAI completions.
If the `finish_reason` is `length`, log a warning to the user.

:param result: The result returned from the OpenAI API.