"Reading chunk length error" after a Python crash at the end of the recording, any solution ? #83
You don't need a footer for a valid XDF file, but if the recording crashes without properly closing, you will end up with invalid data. I don't know if there are any repair tools available, but in principle it should be possible to use all chunks up until the corrupted one.
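A minimal sketch of such a repair along those lines, assuming the standard XDF chunk layout (a 4-byte "XDF:" magic, then chunks each prefixed by a variable-length size field: one byte giving the number of length bytes, which must be 1, 4, or 8, followed by the little-endian length itself). The function name is hypothetical; it simply keeps every chunk up to the first invalid or truncated one:

```python
import struct

def truncate_corrupt_xdf(data: bytes) -> bytes:
    """Return the longest valid chunk-aligned prefix of an XDF byte stream.

    Scans chunk by chunk and stops at the first chunk whose
    variable-length size header is invalid or that runs past EOF,
    which is what a crash mid-write typically leaves behind.
    """
    assert data[:4] == b"XDF:", "not an XDF file"
    pos = 4
    while pos < len(data):
        nbytes = data[pos]  # how many bytes encode the chunk length
        if nbytes == 1:
            if pos + 2 > len(data):
                break
            length, header = data[pos + 1], 2
        elif nbytes == 4:
            if pos + 5 > len(data):
                break
            length, header = struct.unpack("<I", data[pos + 1:pos + 5])[0], 5
        elif nbytes == 8:
            if pos + 9 > len(data):
                break
            length, header = struct.unpack("<Q", data[pos + 1:pos + 9])[0], 9
        else:
            break  # invalid varlen marker: corruption starts here
        if pos + header + length > len(data):
            break  # chunk was truncated by the crash
        pos += header + length
    return data[:pos]
```

You would run this on the raw file bytes and write the result to a new file; since a footer is not required, pyxdf should then stop cleanly at the truncation point instead of raising.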
You can use XDFBrowser to inspect the file: https://github.com/xdf-modules/XDFBrowser; it can give you a bit more information about the misbehaving chunk. Given that the recording stopped, the warning you see likely just reflects the data loss: the recorder stopped writing to the file before the chunk (and the footer) was finished. I also don't know of any repair tool, but writing one is certainly doable and not too hard. I don't know your pipeline for fif conversion; if you share it, we can give more advice. Following these two example scripts, wouldn't first loading with pyxdf, then converting to mne.Raw, and then saving work?
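A rough sketch of that pyxdf-then-MNE route. The stream name, channel types, and scaling here are assumptions about your recording, not requirements of either library, so adapt them to your streams:

```python
def xdf_to_fif(xdf_path, fif_path, eeg_stream_name="EEG"):
    """Load an XDF file with pyxdf, wrap the EEG stream in an
    mne.io.RawArray, and save it as .fif.

    Assumes a single EEG stream whose name matches `eeg_stream_name`
    (an assumption about your recording, adjust as needed).
    """
    # Imports are deferred so the function can be defined even
    # where pyxdf/mne are not installed.
    import mne
    import pyxdf

    streams, header = pyxdf.load_xdf(xdf_path)
    eeg = next(s for s in streams
               if s["info"]["name"][0] == eeg_stream_name)
    sfreq = float(eeg["info"]["nominal_srate"][0])
    n_channels = int(eeg["info"]["channel_count"][0])
    info = mne.create_info(n_channels, sfreq, ch_types="eeg")
    # pyxdf returns samples as (n_times, n_channels); MNE expects the
    # transpose, in volts (rescale if your amplifier reports microvolts).
    raw = mne.io.RawArray(eeg["time_series"].T, info)
    raw.save(fif_path, overwrite=True)
    return raw
```

Marker streams would need to be added separately as annotations, which is where the alignment question below comes in.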
Thanks for the comments and advice, guys. To give you more details: I'm using LSL to record multiple streams (EEG, eye tracker, markers) synchronously and a modified version of OpenSesame to run the experiment. Thanks. Unfortunately, I cannot attach one of those .xdf files because this file type is not supported...
I struggle a little bit to understand your issue here. What I understand is that there is an additional marker stream, and you want to use it to create annotations for the mne.Raw, but the timestamps of this marker stream are not well aligned? What I usually do is plot the timestamps to inspect their distribution. If they are only shifted (but not clustered due to a network issue), you might be able to recover them. Did you start the LabRecorder recording and your experimental task via a script, or was everything started separately and manually? This seems to be in addition to the file recording having been aborted prematurely. Both might have been caused by a common error on the recording PC, but the fixes are probably different.
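One quick way to do that inspection numerically rather than with a plot. This helper is my own sketch, not an existing tool: it looks at the inter-marker intervals, since a constant clock offset leaves the intervals untouched (recoverable by shifting), while network buffering shows up as many intervals far from the median (clustered, harder to fix):

```python
from statistics import median

def interval_summary(timestamps, rel_tol=0.25):
    """Summarise inter-marker intervals to spot clustering.

    Returns (median_interval, fraction_of_irregular_intervals), where
    an interval is "irregular" if it deviates from the median by more
    than `rel_tol` * median. A fraction near 0 with an overall shift
    suggests a recoverable constant offset; a large fraction suggests
    clustered timestamps, e.g. from network buffering.
    """
    if len(timestamps) < 3:
        raise ValueError("need at least 3 timestamps")
    diffs = [b - a for a, b in zip(timestamps, timestamps[1:])]
    med = median(diffs)
    irregular = sum(1 for d in diffs if abs(d - med) > rel_tol * med)
    return med, irregular / len(diffs)
```

With pyxdf, the marker timestamps would be the stream's `time_stamps` array; the 25% tolerance is an arbitrary starting point.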
I agree that these might be two separate issues. FYI, I've just added an option to view XDF chunks to MNELAB. It's pretty much what XDFBrowser does, but it should be easier to deploy (compiling a C++ program can be a hurdle for those who cannot use the provided Windows binary). If you want to use this feature right now (i.e. before the next release), you can install the dev version with:
The option is then available under "File" – "Show XDF chunks...".
Hi guys,
I'm trying to read .xdf files with the pyxdf.load_xdf() function, but I get this error for some of my files:
Error reading chunk length
Traceback (most recent call last):
File "/user/jbelo/home/anaconda3/lib/python3.7/site-packages/pyxdf/pyxdf.py", line 237, in load_xdf
chunklen = _read_varlen_int(f)
File "/user/jbelo/home/anaconda3/lib/python3.7/site-packages/pyxdf/pyxdf.py", line 487, in _read_varlen_int
raise RuntimeError("invalid variable-length integer encountered.")
RuntimeError: invalid variable-length integer encountered.
got zero-length chunk, scanning forward to next boundary chunk.
For those files, Python crashed during the recording, at the very end of the experiment. Even though the experiment had already finished, it seems the files were corrupted by the crash.
I checked the files and there is no "Footer" dict.
The second problem is that when I want to convert my .xdf files to .fif (using MNELAB), it doesn't work, probably because of this error.
Do you have any suggestions for solving these problems, or are those files definitely unusable?
Thanks in advance,
Joan