
[Feature Request] Log Forwarding Support for python-SDK #311

Closed
samuelmakarovskiy opened this issue Apr 13, 2023 · 5 comments · Fixed by #413
Labels
enhancement (New feature or request)

Comments

@samuelmakarovskiy

samuelmakarovskiy commented Apr 13, 2023

Is your feature request related to a problem? Please describe.

I've noticed an interesting behavior that arises when all instances of a Temporal server are brought down after a Python worker has connected. The worker retries polling indefinitely to reconnect, which is fine, but the real issue is that the console logs from the underlying Rust Core are not formatted like the ones from the Python worker logic. See below:

\x1b[2m2023-04-13T14:17:45.292663Z\x1b[0m \x1b[33m WARN\x1b[0m \x1b[1mtemporal_client::retry\x1b[0m\x1b[2m:\x1b[0m gRPC call poll_workflow_task_queue retried 119 times \x1b[3merror\x1b[0m\x1b[2m=\x1b[0mStatus { code: Unavailable, message: "error trying to connect: dns error: failed to lookup address information: Name or service not known", source: Some(tonic::transport::Error(Transport, hyper::Error(Connect, ConnectError("dns error", Custom { kind: Uncategorized, error: "failed to lookup address information: Name or service not known" })))) }

Fixing this would also let me guarantee that any Core issues can be detected by monitoring unified logs, not just stdout.

Describe the solution you'd like

A couple of things would work:

  1. Could turn off pretty printing for the Rust Core logs so that at least I can make my application's stdout logs match more reliably
  2. The SDK could support passing back Core logs to Python like it does for TypeScript (and maybe other SDKs). I see it's not supported in the code, but maybe worth clarifying why?

Additional context

samuelmakarovskiy added the enhancement label Apr 13, 2023
@cretz
Member

cretz commented Apr 13, 2023

Could turn off pretty printing for the Rust Core logs so that at least I can make my application's stdout logs match more reliably

This can be disabled by setting the RUST_LOG_COLORS=never env var, I believe. Ref: https://docs.rs/env_logger/latest/env_logger/#disabling-colors. (You can probably also make it clear somehow on the TTY that it does not support colors, e.g. TERM=dumb.)
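
For illustration, a minimal sketch of trying that suggestion from the Python side, assuming env-based logger config is read when Core starts up (as noted further down in this thread, it did not actually take effect, because Core uses tracing rather than env_logger):

```python
import os

# Sketch only: set the suggested env vars before the Temporal SDK (and its
# Rust Core) is imported, on the assumption that env-based logger config is
# read at startup. Per the follow-up comment below, this did not disable
# colors because Core uses tokio-rs/tracing rather than env_logger.
os.environ.setdefault("RUST_LOG_COLORS", "never")
os.environ.setdefault("TERM", "dumb")

from temporalio.client import Client  # import after the env vars are set
```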

I see it's not supported in the code, but maybe worth clarifying why?

The only real reason is you're the first to need it :-) This is something we can add I think. I will leave this issue open on the backlog and we will look into implementing it.

@samuelmakarovskiy
Author

This can be disabled by setting the RUST_LOG_COLORS=never env var, I believe. Ref: https://docs.rs/env_logger/latest/env_logger/#disabling-colors. (You can probably also make it clear somehow on the TTY that it does not support colors, e.g. TERM=dumb.)

That didn't seem to work; I think the Rust Core uses tokio-rs/tracing.

And from this thread, it appears the config requires the filter to have set_ansi on it, which defaults to true.

@cretz
Member

cretz commented Apr 13, 2023

I see! I opened temporalio/sdk-core#529 to let that be disabled at runtime.

@jmdacruz
Contributor

jmdacruz commented Aug 25, 2023

Somewhat related to this topic, it would be nice to have more control over the structure of the Rust logging. One scenario is using structured (JSON) logging on the Python side, which currently gets mixed with the Rust log lines (as @samuelmakarovskiy shared in the description). Structured logging is desirable in certain scenarios, like Kubernetes deployments where the infrastructure captures containers' stdout/stderr and forwards it to some logging backend.

Edit: @cretz clarified on Slack that "log forwarding" in this context means forwarding to the Python side, which would effectively solve the problem.
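
For illustration, a minimal sketch of the kind of structured (JSON) logging setup on the Python side that currently gets interleaved with the pretty-printed Rust lines; the JsonFormatter below is a hypothetical stand-in built only on the standard library, not something provided by the SDK:

```python
import json
import logging


class JsonFormatter(logging.Formatter):
    """Hypothetical stand-in: render each log record as a single JSON line."""

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps(
            {
                "ts": self.formatTime(record),
                "level": record.levelname,
                "logger": record.name,
                "message": record.getMessage(),
            }
        )


handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logging.basicConfig(level=logging.INFO, handlers=[handler])

# Python-side worker/workflow logs now come out as JSON lines, but without
# log forwarding the Rust Core lines still go straight to stdout in their
# own format and get mixed in with these.
```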

cretz added a commit to cretz/temporal-sdk-python that referenced this issue Oct 24, 2023
cretz mentioned this issue Oct 24, 2023
@cretz
Member

cretz commented Oct 31, 2023

After further discussion, we are going to wait on temporalio/sdk-core#618 and have buffered, pushed logs be sent to Python logging.
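
For anyone finding this later, a rough sketch of what forwarding Core logs into Python logging looks like with the approach from #413; the names below (e.g. LogForwardingConfig in temporalio.runtime, the filter string) are recalled from that work and worth double-checking against the released temporalio package:

```python
import asyncio
import logging

from temporalio.client import Client
from temporalio.runtime import (
    LoggingConfig,
    LogForwardingConfig,
    Runtime,
    TelemetryConfig,
)

# Route Rust Core log records through a standard Python logger so they pick
# up whatever handlers/formatters the application already configured.
core_logger = logging.getLogger("temporal_core")

runtime = Runtime(
    telemetry=TelemetryConfig(
        logging=LoggingConfig(
            filter="INFO",  # level filter applied to Core's own loggers
            forwarding=LogForwardingConfig(logger=core_logger),
        )
    )
)


async def main() -> None:
    # Pass the custom runtime when connecting; workers built from this
    # client use it as well, so Core logs flow into core_logger.
    client = await Client.connect("localhost:7233", runtime=runtime)


if __name__ == "__main__":
    asyncio.run(main())
```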

cretz added commits to cretz/temporal-sdk-python that referenced this issue Nov 3, 2023
cretz closed this as completed in #413 Nov 6, 2023
cretz added a commit that referenced this issue Nov 6, 2023