The python logging module no longer seems to work properly inside a notebook #1397
Comments
We use logging in the kernel itself as well, and that shouldn't be redirected into the notebook. I was assuming that we set up a handler on the root logger, but looking at it, we seem to be correctly setting it on the application logger. So it should still be possible to handle other logs separately. I'm not sure what the root issue is here.
The root issue is that the example code doesn't run as expected in the notebook. To reproduce the issue, run the above sample code inside of a notebook. Maybe when the notebook starts capturing output it should swap the default logging
@takluyver Oh I see what you mean, the code in:

```python
import logging

root_logger = logging.getLogger()
print('root_logger =', root_logger)
print('handlers:', root_logger.handlers)
handler = root_logger.handlers[0]
print('handler stream:', handler.stream)

import sys
print('sys.stderr:', sys.stderr)
```
Right, I don't know what's creating the root handler, though.
Turns out I was looking at the wrong thing. I put a traceback in the logger constructor, and did see jupyter creating a logger via the traitlets code. But I missed the fact that I had another logger being created, which was actually the root logger. The root logger is actually created by tornado, here:
The actual method does this:

```python
def _setup_logging(self):
    if not any([logging.getLogger().handlers,
                logging.getLogger('tornado').handlers,
                logging.getLogger('tornado.application').handlers]):
        logging.basicConfig()
```

https://github.com/tornadoweb/tornado/blob/stable/tornado/ioloop.py#L360

So if jupyter were to set up the root logger in each subprocess before tornado picks this up, everything would be fine. @takluyver what do you think?
As far as I can tell, jupyter spawns a subprocess for each open notebook. This subprocess then spawns a thread for tornado. If so, then all we have to do is configure the root logger with the properly captured sys.stderr and all will be well.
Each notebook is talking to a separate kernel process that runs your code. The tornado event loop is running in the main thread of that to handle messages (this is separate from the tornado webserver that runs the notebook application). I think probably the way to go is to configure a handler for the 'tornado' logger, and leave the root handler unconfigured.
@takluyver Yes, but this is not the notebook server's tornado that sets up the logger.
So there's the This method calls, in this order:
The easy fix would be to just move the call to
init_io must be called after init_sockets because it requires the sockets to have been created.
I think we just need to set up a handler for the 'tornado' logger before starting the event loop. Then, from the code you posted above, tornado will detect that and not call
@minrk Good point. The ZMQ IOLoop (which creates the tornado IO loop) is created here: Would that also be the right place to call some new Then
I think setting up logging handlers is an application-level concern, so it should be done on the kernel application class - we just need to ensure it happens early enough.
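A rough sketch of that application-level hook. All names here are invented for illustration (this is not ipykernel's actual API); the point is only the ordering: the 'tornado' logger gets its handler before the event loop would start, so tornado's `basicConfig()` fallback never fires:

```python
import logging

class KernelAppSketch:
    """Hypothetical kernel application class; names are illustrative."""

    def init_logging(self):
        # In the real kernel this handler would wrap the captured
        # iostream; a plain StreamHandler stands in for it here.
        handler = logging.StreamHandler()
        logging.getLogger('tornado').addHandler(handler)

    def start(self):
        self.init_logging()  # must run before the event loop starts
        # ... init_sockets(), init_io(), and IOLoop start would follow

app = KernelAppSketch()
app.start()
print(len(logging.getLogger('tornado').handlers))  # 1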
@takluyver the I'm not very familiar with the code there, so I'm looking for guidance on where to actually place it.
Fixed in ipython/ipykernel#127
@tudorprodan I tried reload(logging), which works for me; see http://stackoverflow.com/a/21475297
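On Python 3 the `reload` built-in no longer exists; assuming the same workaround is wanted there, it would be spelled via `importlib`:

```python
import importlib
import logging

# Re-import the logging module so it re-creates its default state.
# Caveat: this can orphan handlers that other code still holds.
reloaded = importlib.reload(logging)
print(reloaded is logging)  # True -- reload updates the module in place
```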
logging.shutdown() is also not working as expected in Python 3.7. It neither errors out nor stops the logger modules.
Had this issue in danijar/handout#11. IPython was closing handlers after running the script, while Spyder was accumulating handlers on each run.
When running the example from the python logging documentation page in a jupyter notebook, the messages go to the notebook console as opposed to the notebook's webpage output via the capture mechanism. Just create an empty notebook, run the example below, you'll see what I mean.
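The example referenced here is presumably the basic snippet from the Python logging tutorial, along these lines:

```python
import logging

logging.warning('Watch out!')  # goes to the real stderr, not the notebook output
logging.info('I told you so')  # below the default WARNING level, so not shown
```

Calling a module-level function like `logging.warning()` with no configuration makes `basicConfig()` run lazily, attaching a `StreamHandler` bound to whatever `sys.stderr` was at that moment, which is the crux of the bug described below.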
The logging module by default creates a `StreamHandler` object whose stream defaults via `if not stream: stream = sys.stderr`. A lot of modules use the logging module as in the example. The problem is that the notebook actually inits the logging module before the output capture mechanism is set, so the default created `StreamHandler` actually has a reference to the original `sys.stderr`, before the notebook replaces it with its own `ipykernel.iostream.OutStream`. Shouldn't the notebook reset the logging module?
I've been looking into this because after an upgrade I noticed missing output from some stuff I was running often.