transports(services): disconnect client first #855
Merged
Conversation
aconchillo force-pushed the aleix/transport-services-disconnect-fixes branch from edf15d9 to 7c516e0 on December 13, 2024 at 01:46
markbackman approved these changes on Dec 13, 2024
LGTM
aconchillo force-pushed the aleix/transport-services-disconnect-fixes branch from 7c516e0 to 3446bee on December 13, 2024 at 03:08
aconchillo force-pushed the aleix/transport-services-disconnect-fixes branch from 3446bee to ccc9699 on December 13, 2024 at 03:10
aconchillo added a commit that referenced this pull request on Dec 18, 2024
This rolls back a previous change (#855) that tried to fix an issue the wrong way. The reasoning behind this fix is that the parent class might be sending audio or messages (through the subclass), and if we disconnect before all the data is sent we end up with incomplete audio or even errors. Therefore, we first make sure the parent tasks stop, and only then is it safe to disconnect.
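For illustration, here is a minimal asyncio sketch of the ordering described in that commit message. The class and method names are hypothetical, not pipecat's actual API: the subclass lets the parent finish flushing its output task before closing the client connection, so no queued audio or messages are lost.

```python
import asyncio


class BaseOutputTransport:
    """Hypothetical parent: owns the task that pushes queued audio/messages out."""

    def __init__(self):
        self._queue: asyncio.Queue = asyncio.Queue()
        self._sender: asyncio.Task | None = None

    async def start(self):
        self._sender = asyncio.create_task(self._send_loop())

    async def stop(self):
        # Drain everything that is still queued before tearing the task down.
        await self._queue.join()
        if self._sender:
            self._sender.cancel()

    async def _send_loop(self):
        while True:
            frame = await self._queue.get()
            await self.write_frame(frame)  # delegated to the subclass
            self._queue.task_done()

    async def write_frame(self, frame):
        raise NotImplementedError


class WebsocketOutputTransport(BaseOutputTransport):
    """Hypothetical subclass: owns the client connection."""

    def __init__(self, client):
        super().__init__()
        self._client = client

    async def write_frame(self, frame):
        await self._client.send(frame)

    async def stop(self):
        # Let the parent's tasks stop first; only then is it safe to disconnect.
        await super().stop()
        await self._client.disconnect()
```

Disconnecting in the reverse order (client first) can close the connection while the send loop still has frames queued, which is exactly the incomplete-audio symptom the commit describes.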
xingweitian added a commit to Forethought-AI-OSS/pipecat that referenced this pull request on Dec 22, 2024
* fixing [pipecat-ai#868] bug where deepgram client fails due to language
* pyproject: update numpy, pydantic, loguru
* update dev-requirements
* pyproject: add audioop-lts for python 3.13
* fix ruff formatting
* update CHANGELOG
* Add custom assistant context aggregator for Grok due to content requirement in function calling
* sentry: fix formatting
* adding type check and value check
* added change log
* ruff change
* fixing ruff issue
* updating readme to support auto-formatting of ruff in pycharm
* removing --config flag
* README: added Emacs import re-organization with Ruff
* examples(storytelling-chatbot): update dependencies
* fix ruff linter import organization
* github: run ruff check import linter
* pyproject: use tool.ruff.lint sections
* transports: call parent stop() before disconnecting. This rolls back a previous change (pipecat-ai#855) that tried to fix an issue the wrong way: the parent class might be sending audio or messages (through the subclass), and if we disconnect before all the data is sent we end up with incomplete audio or even errors. Therefore, we first make sure the parent tasks stop, and only then is it safe to disconnect.
* processors(filters): allow passing EndFrame
* pipeline(parallel): wait for slowest endframe. If we are sending an EndFrame and a ParallelPipeline has multiple pipelines, we want to wait until the slowest pipeline has finished before pushing the EndFrame downstream; otherwise we could disconnect from the transport too early.
* audio(koala): add new audio filter KoalaFilter
* Add TranscriptionProcessor
* Update OpenAI's to_standard_messages to return the verbose message format
* Add docstrings for Google and Anthropic's to_standard_messages and from_standard_message functions
* Update OpenAI's from_standard_message to convert back to OpenAI's simple format
* Add timestamp frames and include timestamps in the transcription event and frame
* Add changelog and rename examples
* Code review changes
* TranscriptProcessor to handle simple and list content
* Refactor TranscriptProcessor into user and assistant processors
* Code review feedback
* Add model parameter to OpenAI realtime service constructor, update default model
* pyproject: update langchain to 0.3.12
* Add CerebrasLLMService and foundational example
* Tailor chat completion inputs to Cerebras API
* feat: set auto_mode=true - ElevenLabs tts WSS
* transport(base output): avoid pushing EndFrame twice
* transports(daily): daily-python 0.14.0 (SIP transfer/refer, DTMF)
* Make auto_mode an input parameter for ElevenLabsTTSService; add changelog entry
* Update 11L default model, allow language to be used by more models
* examples(01a): remove unused import
* Add an RTVIProcessor to the simple-chatbot pipeline
* Fix import order
* Update PlayHT to use the latest Websocket connection endpoint
* frame_processor: reset input queue flag with interruptions

Co-authored-by: Vaibhav159 <[email protected]>
Co-authored-by: Aleix Conchillo Flaqué <[email protected]>
Co-authored-by: Mark Backman <[email protected]>
Co-authored-by: Louis Jordan <[email protected]>
Co-authored-by: marcus-daily <[email protected]>
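One item in the list above, "pipeline(parallel): wait for slowest endframe", carries its own reasoning. As an illustration only, with hypothetical names rather than pipecat's actual ParallelPipeline implementation, a parallel stage can hold the EndFrame back until every branch has produced one:

```python
import asyncio


class EndFrame:
    """Stand-in marker frame signalling the end of a stream."""


class ParallelStage:
    """Hypothetical parallel stage: forwards an EndFrame only after the slowest branch ends."""

    def __init__(self, branch_queues, downstream):
        self._branches = branch_queues  # one asyncio.Queue per parallel pipeline
        self._downstream = downstream   # asyncio.Queue feeding the rest of the pipeline

    async def run(self):
        async def drain(queue):
            while True:
                frame = await queue.get()
                if isinstance(frame, EndFrame):
                    return  # this branch is finished
                await self._downstream.put(frame)

        # Wait for *all* branches before pushing a single EndFrame downstream,
        # so the transport is not shut down while a slower branch is still producing.
        await asyncio.gather(*(drain(q) for q in self._branches))
        await self._downstream.put(EndFrame())
```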
xingweitian added a commit to Forethought-AI-OSS/pipecat that referenced this pull request on Dec 28, 2024
The same list of changes as the commit referenced above, plus:

* Add Fish Audio TTS service
* Flush the audio
* Add Fish to the README
* Fix metrics calculations
* Add the ability to send_prebuilt_chat_message when using the DailyTransport
* delay gemini multimodal live websocket connect
* fixes to audio buffer
* function calling dead-end
* working but needs cleanup
* still some cleanup to do
* feature complete gemini audio, transcription, and phrase endpointing demo
* pyproject: update daily-python to 0.14.2
* transports(base_output): fix duplicate push_frame()
* remove stray line
* revert elevenlabs example changes
* update CHANGELOG
* update README
* update README
* update CHANGELOG for 0.0.52

Co-authored-by: Vaibhav159 <[email protected]>
Co-authored-by: Aleix Conchillo Flaqué <[email protected]>
Co-authored-by: Mark Backman <[email protected]>
Co-authored-by: Louis Jordan <[email protected]>
Co-authored-by: marcus-daily <[email protected]>
Co-authored-by: Kwindla Hultman Kramer <[email protected]>
PR description: Make sure we disconnect from the client before we tear down the tasks.
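A minimal sketch of the shutdown order this PR proposes, using hypothetical names rather than pipecat's actual API; note that the later commit quoted above reverses this order (parent tasks stop first, then disconnect).

```python
import asyncio


class ExampleTransportService:
    """Hypothetical transport service illustrating the order proposed in this PR."""

    def __init__(self, client):
        self._client = client
        self._tasks: list[asyncio.Task] = []

    async def stop(self):
        # This PR: disconnect from the client first...
        await self._client.disconnect()
        # ...and only then tear down the background tasks.
        for task in self._tasks:
            task.cancel()
        await asyncio.gather(*self._tasks, return_exceptions=True)
```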