Map Sync Time #31
Hello,
Your email has been received, thank you!
This is an automated reply.
|
There won't be any risk to journal integrity -- it just means that your sync() will occasionally be a no-op. If this is acceptable, do you even need to sync? Every 2**log2_ppc sector writes from FatFS will result in an implicit sync anyway. |
Thank you for confirming! With FatFS, we have found that an f_write will write the data to flash but will not update the file size, so if a file is not explicitly closed or f_sync is not called and the system crashes, the file will exist but with size zero, and the data is not recoverable. http://elm-chan.org/fsw/ff/doc/sync.html |
Sorry, that was probably ambiguous of me. What I meant to suggest was continuing to call f_sync as you are, but dropping the call to dhara_map_sync. This way the FatFS metadata will continue to be written out at regular intervals, and your data will persist at regular intervals too, due to the implicit sync that occurs every 15 sector writes (with a 2kB page size). The downside is that not every call to f_sync is guaranteed to persist, but that is already the case if you sometimes abandon dhara_map_sync. |
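The suggestion above amounts to a change in the FatFS disk-I/O glue: f_sync reaches the medium through disk_ioctl(CTRL_SYNC), and that handler would stop forcing a dhara_map_sync. The sketch below is a simulation only; the dhara_map structure and dhara_map_sync stub are hypothetical stand-ins for the real declarations in dhara/map.h:

```c
#include <stdbool.h>

/* Hypothetical stand-ins for the dhara map object and its sync call;
 * in a real port these come from dhara/map.h. */
struct dhara_map { bool dirty; };
static int dhara_map_sync_stub(struct dhara_map *m) { m->dirty = false; return 0; }

static struct dhara_map map = { .dirty = true };

#define CTRL_SYNC 0  /* FatFS ioctl code issued by f_sync / f_close */

/* Disk-glue sync handler.  With force_dhara_sync enabled this is the
 * original behaviour (every f_sync blocks on dhara garbage collection);
 * with it disabled, f_sync still flushes FatFS metadata to the map,
 * and persistence relies on dhara's implicit per-group checkpoint. */
static int disk_ioctl_sync(int cmd, bool force_dhara_sync)
{
    if (cmd == CTRL_SYNC && force_dhara_sync)
        return dhara_map_sync_stub(&map); /* original: explicit sync */
    return 0;                             /* proposed: defer to implicit sync */
}
```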
Dropping the call to dhara_map_sync is an interesting idea. We'll try what we have for now, and review your suggestion in a future iteration. I sincerely appreciate your input and responsiveness on this issue. |
We are implementing a time-sensitive logging application where we store periodic sensor data to multiple files. Our implementation connects ChaN's FatFS 0.14b with Dhara. Our flash is a 4Gb NAND with a 4k page size. Generally the system performs well and is robust. Every minute, we perform an f_sync to minimize the risk of data loss. We are storing data to at least 6 files, and the files stay open for the duration of the logging session. After a period of time (2 hours, for example), we would occasionally experience an f_sync execution duration greater than 1 second. Our system does not have sufficient memory to buffer sensor data for that long, resulting in data loss from sensor buffer overflow.
Our analysis of the problem shows that garbage collection during dhara_map_sync, in the while (!dhara_journal_is_clean(&m->journal)) loop, is the primary cause of the system delay. Our mitigation is to exit the while loop if the execution time exceeds our defined performance tolerance. We only enable the restriction on execution time when logging. Is there unforeseen risk to the integrity of the journal with this approach?
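The time-bounded sync described above might be sketched as below. This is a self-contained simulation, not dhara itself: journal_is_clean, map_gc_step, and the millisecond clock are stubs standing in for dhara_journal_is_clean, the per-iteration garbage-collection work inside dhara_map_sync, and a platform timer.

```c
#include <stdbool.h>
#include <stdint.h>

/* Stubs simulating a dirty journal and a clock that advances 100 ms
 * per garbage-collection step; a real port would use the dhara API
 * and a hardware timer instead. */
static int pending_gc_steps = 10;                  /* fake dirty-journal state */
static bool journal_is_clean(void) { return pending_gc_steps == 0; }
static void map_gc_step(void) { if (pending_gc_steps) pending_gc_steps--; }
static uint32_t fake_now_ms;
static uint32_t now_ms(void) { return fake_now_ms += 100; }

/* Deadline-bounded sync: perform GC steps until the journal is clean
 * or the time budget is exhausted.  Returns true if fully synced;
 * on false, work already done is kept and can be resumed later. */
static bool bounded_sync(uint32_t budget_ms)
{
    uint32_t start = now_ms();
    while (!journal_is_clean()) {
        if ((uint32_t)(now_ms() - start) > budget_ms)
            return false;          /* give up for now; journal stays valid */
        map_gc_step();
    }
    return true;
}
```

One design note: each early exit still makes forward progress, so repeated bounded calls amortize the garbage-collection cost across logging intervals instead of paying it all in one long f_sync.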