Recreate current rolling file immediately on file deletion #128
Do you mean you are manually deleting the log file during the period that Serilog would expect to be able to write to it?
Yes, it's possible when you run your application in a Docker container, for example.
Hmm, I'm not getting how running in a container deletes log files during writing - I'd appreciate it if you could describe a scenario where this happens?
No, 'running in a container' does not delete log files during writing.
OK, but why would you do that? 😄 I'm trying to figure out if this is a realistic, common thing to happen, or an edge case where you have to actively be trying to cause a problem for it to be an issue. Also, as an aside, if you configure the sink using …
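The specific option suggested here is cut off in the archived thread. One plausible reading, offered purely as an assumption, is the File sink's `shared` flag, which opens the file without an exclusive lock. A minimal sketch against the modern Serilog.Sinks.File API, with illustrative paths:

```csharp
using Serilog;

class Program
{
    static void Main()
    {
        // `shared: true` opens the file without an exclusive lock, so other
        // processes can read (or delete) it while this app is writing.
        Log.Logger = new LoggerConfiguration()
            .WriteTo.File(
                "logs/app-.log",
                rollingInterval: RollingInterval.Hour, // hourly rolling, as in the issue
                shared: true)
            .CreateLogger();

        Log.Information("Hello from a shared-mode file sink");
        Log.CloseAndFlush();
    }
}
```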
I will give it a try. But something tells me that this approach does not work.
Someone accidentally removes a log file on production... The data is lost. At that point we have to start an investigation.
I don't know about the intricacies of the Linux file system, but it absolutely works on Windows.
Just a little piece of unsolicited advice: being rude to people will not get you what you want from them, especially if they are maintaining OSS in their spare time, for free.

If I'm being honest, I'm not really seeing how this could be a realistic or common problem to be dealt with by a logging framework. That said, it would be nice if we didn't lose messages in the rare event that somebody explicitly deletes a file that wasn't locked.

The most obvious ways to do this (e.g. checking if the file exists before every write) are likely to hurt performance badly, so we need to be careful about that. Do you have any suggestions for an approach?
Hi all! @asset-2 thanks for raising this as a separate issue; I see it's related to #96 but not covered by it, so I guess we'll track these two things separately. Thanks for digging in @cocowalla!
I think you might be misunderstanding each other; I can see both interpretations of that statement, and think that "From my perspective" is intended to put the later statement in context - does this fit with the goals of the project? (I'm not sure what the answer is, at this point :-))

#96 won't be an issue on Unix, as file "locks" are only advisory on that platform, unlike Windows where a lock is enforced. Because of that, this ticket is something of an inverse of that problem.

I think we could attempt to implement this, but because the file will still "exist" from the app's perspective on Unix (it will just be unlinked from its name in the directory hierarchy, until the process closes the file handle), we can't do it in response to an exception; instead, this would necessitate some kind of monitor process, AFAIK (need to grab my copy of Kerrisk's book to confirm!). I'm not sure it will be worth that amount of effort, but I'm keeping an open mind if someone is interested in exploring it.

To answer your original questions, @asset-2 👍
Not at this point, sorry.
At this point it's probably an operational problem to solve (avoid accidentally deleting log files by using automated runbooks/scripts to manage production servers).
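A small illustration of the unlink behaviour described above - a sketch assuming a Linux host and .NET, not taken from the thread itself. On Linux, deleting an open file merely unlinks the name, so no exception ever reaches the writer for the sink to react to:

```csharp
using System;
using System.IO;

class UnlinkDemo
{
    static void Main()
    {
        const string path = "demo.log";
        using var writer = new StreamWriter(path);
        writer.WriteLine("first line");
        writer.Flush();

        // On Linux this succeeds even though the file is open in this
        // process: the name is unlinked, but the inode stays alive.
        File.Delete(path);

        // No exception: writes through the existing handle keep working,
        // but the data is now unreachable by name.
        writer.WriteLine("second line");
        writer.Flush();

        Console.WriteLine(File.Exists(path)); // False - the name is gone
    }
}
```

(On Windows the `File.Delete` call would instead throw, because the writer holds an enforced lock - which is exactly the asymmetry the comment describes.)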
@cocowalla @nblumhardt Thank you guys for considering the questions.
I need this feature, and my use case is that I move logs to another storage on a regular schedule for a bank application: we roll logs daily, but an audit happens every 2 hrs, and we have to remove the processed logs without restarting the application. For perspective, log4net and NLog do this, but we chose Serilog because it just works - except for recreating deleted logs that it has a handle to. I'm wondering if adding a conditional to check if a file exists would fix it.
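A hedged sketch of the "check before every write" idea floated here - not how the sink is implemented, just an illustration of the approach and its cost; the class and names are hypothetical:

```csharp
using System.IO;

class ReopeningWriter
{
    private readonly string _path;
    private StreamWriter _writer;

    public ReopeningWriter(string path)
    {
        _path = path;
        _writer = new StreamWriter(path, append: true);
    }

    public void WriteLine(string message)
    {
        // The obvious fix: one existence check per event. It works, but it
        // adds a filesystem stat to every log call, which is the performance
        // concern raised earlier in the thread.
        if (!File.Exists(_path))
        {
            _writer.Dispose();
            _writer = new StreamWriter(_path, append: true); // recreate the file
        }
        _writer.WriteLine(message);
        _writer.Flush();
    }
}
```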
Hi, just wanted to chime in and say that I'm getting around this issue by calling …
Hi, is there a way to implement it? I think it's a realistic problem. Think about a high-availability application that cannot be restarted during operating hours; now the log disk is full, and the infrastructure team clears all files to free up space.
Is there a possibility to adopt a buffering-style approach where entries enter a queue and flush after some limit is reached? It would reduce the checks required for file existence by a significant factor and could possibly leverage the existing buffering options available. Another approach may lie in leveraging filesystem hooks to determine if the file was deleted, and temporarily caching new entries until another file can be created and written to.
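A sketch of the "filesystem hooks" idea from this comment, using .NET's `FileSystemWatcher`; this is hypothetical and not part of the sink. A watcher avoids a per-write existence check by flipping a cheap flag only when the OS reports a deletion:

```csharp
using System.IO;
using System.Threading;

class DeletionMonitor
{
    private readonly FileSystemWatcher _watcher;
    private int _deleted; // 0 = file present, 1 = deletion observed

    public DeletionMonitor(string directory, string fileName)
    {
        _watcher = new FileSystemWatcher(directory, fileName);
        _watcher.Deleted += (_, _) => Interlocked.Exchange(ref _deleted, 1);
        _watcher.EnableRaisingEvents = true;
    }

    // Read on the write path instead of a stat per event; the writer
    // reopens the file only when this returns true (and resets the flag).
    public bool FileWasDeleted() =>
        Interlocked.CompareExchange(ref _deleted, 0, 1) == 1;
}
```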
@impr0t the Serilog File Sink already supports buffered output. The default is not to buffer output, because you risk losing logs in the event of a crash. I think this is a pretty sane default.
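For reference, a minimal sketch of enabling that buffered mode (the path is illustrative):

```csharp
using Serilog;

// Buffered mode batches writes in memory for throughput, at the cost of
// possibly losing the tail of the log if the process crashes.
Log.Logger = new LoggerConfiguration()
    .WriteTo.File("logs/app.log", buffered: true)
    .CreateLogger();
```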
Hmm, unfortunately I think any approach that risks losing logs is probably a non-starter.
I often get into the log folder and delete everything under it to save space.

Edited:
Test run phase: …
Actual deletion phase: …
Hi,
You can log an issue in my repo for more features and requests. Would be happy to get your feedback: …
It is not a new issue. It partially overlaps with #36, #87, #96.
The service uses an hour as the rolling interval. The current log file is deleted in between. The logger waits until the chosen interval ends, and only then creates a new file to log to. All events and messages during the remaining time are lost. The expected behavior, for me, is to recreate the file immediately and write to it as soon as there is something to log.
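To make the report concrete, a minimal configuration of the kind described - a sketch assuming the current Serilog.Sinks.File API and an illustrative path:

```csharp
using Serilog;

// Hourly rolling file sink. If the current hour's file is deleted
// mid-interval, events are silently dropped until the next hour begins
// and a new file is created.
Log.Logger = new LoggerConfiguration()
    .WriteTo.File("logs/app-.log", rollingInterval: RollingInterval.Hour)
    .CreateLogger();
```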
Do you guys plan to implement this thing? What possible workarounds do you suggest?