enable writing (MNE-Python) annotations #77
I didn't know that we don't support MNE annotations – we definitely should!
It should be onset, duration, and description.
I think we actually support channel-specific annotations already, but it's not available in the plotting GUI yet (so users cannot add them other than programmatically). |
Do you perhaps want to give it a shot @cbrnr? I think it'd also benefit MNELAB 😏 |
Give me some time and I can come up with something. Could it be as easy as using `mne.events_from_annotations`? |
That is already supported as far as I can see: you'd just get events (e.g., via `mne.events_from_annotations`) and pass them to pybv. In the VMRK these will then be written as such:
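(illustrative marker lines, assuming integer event codes 1 and 3; exact formatting may differ)

```
Mk1=Stimulus,S  1,6230,1,0
Mk2=Stimulus,S  3,6804,1,0
```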
With the "Stimulus" event type, and the MNE event code (an integer) being transformed to a string In my example above, the duration of each of those events is 1 sample. With pybv we can currently also use the So indeed, (maybe what you had in mind) one way to improve the current state is to provide a convenience function that goes from MNE annotations to Having that said, I was interested in more advanced usage, because the way above still loses the actual annotation in favor of some integer code. And the actual annotation is not written to the BrainVision file then (only the integer code), and you'd have to keep the map somewhere else. I thought about going beyond the Something like this:
Starting from annotations like these (an illustrative stand-in):
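```python
import mne

annotations = mne.Annotations(
    onset=[1.0],       # in seconds
    duration=[0.5],    # in seconds
    description=["BAD signal dropout"],
)
```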
We could have:
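(illustrative VMRK output, assuming a 1000 Hz sampling rate)

```
Mk1=Comment,BAD signal dropout,1000,500,0
```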
And I imagine that users could pass this information to pybv via a new parameter ("annotations" or similar). When both the "events" and the "annotations" parameters are supplied, they would be merged internally. |
Having slept on it, I think we should NOT introduce a new parameter "annotations" (or the like) and instead make a breaking API change to our parameter "events", which would accept a dict like:

```python
{
    "onset": [],
    "duration": [],
    "description": [],
    "type": [],
    "channel": [],
}
```

With the following rules: "onset" and "description" are required, while "duration", "type", and "channel" are optional with sensible defaults ("type" defaulting to "Stimulus", "duration" to 1 sample, and "channel" to 0, meaning "all channels").
In the simplest scenario, supplying a dict as below works:

```python
{
    "onset": [6230, 6804, 7849, 8475],
    "description": [1, 3, 4, 6],
}
```

and would result in:
That also shows that users can fix their pipelines easily to accommodate the new API:

```python
import numpy as np

# "old" events was a np.ndarray of shape (n, 2) or (n, 3)
events = np.array([[6230, 1], [6804, 3]])  # illustrative example data

new_events = {}
new_events["onset"] = events[:, 0]
new_events["description"] = events[:, 1]
if events.shape[1] == 3:
    new_events["duration"] = events[:, 2]
```

Here is an example of a more complex scenario, which shows that users can specify much more information with the new API, facilitating export from MNE annotations:
"onset": [1000, 2000, 3000],
"duration": [25, 50, 75],
"description": [1, "BAD signal dropout", 555],
"type": ["Stimulus", "Comment", "Response"],
"channel": [0, 64, 0],
} which would result in:
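(illustrative; the string formatting of the integer codes is a guess, and the "channel" values are taken as VMRK channel numbers, where 0 means "all channels")

```
Mk1=Stimulus,S  1,1000,25,0
Mk2=Comment,BAD signal dropout,2000,50,64
Mk3=Response,R555,3000,75,0
```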
|
Also, I can take this over, @cbrnr; before asking you in #77 (comment) I hadn't known that I feel strongly about this one 😆 |
Please go ahead! |
In JS and JSON, one would opt to have an array of objects instead; in Python terms: a list of dicts. This is what I would like to see here too. "channels" should be pluralized.
This makes the properties of each individual annotation very clear. |
I'd like to emphasize that I don't think "0" is a good pick to mean "all channels". I think [...]
|
I like that notation, but wouldn't it make specifying an input for pybv much harder (more unwieldy) for users?
I agree in principle, but I don't think the BrainVision spec allows for that:
One could think about allowing it, and then internally in pybv write an event as below. Example input:

```python
{
    "onset": [1000],
    "description": ["BAD signal dropout"],
    "type": ["Comment"],
    "channel": [[1, 23, 55]],
}
```

Corresponding output:
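(illustrative, assuming the default duration of 1 sample)

```
Mk1=Comment,BAD signal dropout,1000,1,1
Mk2=Comment,BAD signal dropout,1000,1,23
Mk3=Comment,BAD signal dropout,1000,1,55
```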
Note how the same event is written 3 times with only the channel that it refers to differing. It's ugly, but I don't see another spec-conformant way (apart from not supporting it at all), do you? |
In response to #77 (comment): We need to follow the BrainVision spec, and your proposal unfortunately does not work within those constraints 😞 |
👍
[...]
It is suuuper ugly and nobody could possibly want that. But the JSON-inspired notation I proposed above makes it trivial to resolve this issue, while staying very readable and trivial to parse:

```python
[
    {
        'onset': 1000,
        'description': 'BAD signal dropout',
        'type': 'Comment',
        'channel': 1
    },
    {
        'onset': 1000,
        'description': 'BAD signal dropout',
        'type': 'Comment',
        'channel': 23
    },
    {
        'onset': 1000,
        'description': 'BAD signal dropout',
        'type': 'Comment',
        'channel': 55
    }
]
```
|
Since MNE's Annotations don't immediately allow for an output that would suit this API, I propose we add [...] |
@hoechenberger I feel like you are not reading my messages 😂 This thread is about how to make the most out of the BrainVision format for annotations. I like your proposals and they make sense --> but they don't fit with the BrainVision spec |
My proposals were for the "annotations" parameter you suggested introducing... I'm confused now! 😬🙈 |
Okay, I think I get part of it now - I think I was sometimes also talking about how to write it to VMRK, whereas you were always talking about the input into "write_brainvision". :-)
I am reconsidering my argument here: Let's go with your proposed "list of dicts" structure. It is not significantly harder to prepare; to go from my initial structure to yours it'd just be:

```python
old_annots = {
    "onset": [1000, 2000, 3000],
    "duration": [25, 50, 75],
    "description": [1, "BAD signal dropout", 555],
    "type": ["Stimulus", "Comment", "Response"],
    "channel": [0, 64, 0],
}

# go from old_annots (dict of lists) to new_annots (list of dicts)
new_annots = []
n_annot = len(old_annots["onset"])
for i_annot in range(n_annot):
    annot = {}
    for key in old_annots.keys():
        annot[key] = old_annots[key][i_annot]
    new_annots.append(annot)

# new_annots
[{'onset': 1000,
  'duration': 25,
  'description': 1,
  'type': 'Stimulus',
  'channel': 0},
 {'onset': 2000,
  'duration': 50,
  'description': 'BAD signal dropout',
  'type': 'Comment',
  'channel': 64},
 {'onset': 3000,
  'duration': 75,
  'description': 555,
  'type': 'Response',
  'channel': 0}]
```
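For what it's worth, the same conversion also fits in a single line (a sketch, assuming all lists in the dict have equal length):

```python
new_annots = [dict(zip(old_annots, vals)) for vals in zip(*old_annots.values())]
```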
The gain in readability and clarity of the "list of dicts" beats the simplicity of the other structure. We can also use that list-of-dicts structure for coordinates in #44 and have a more consistent API overall. |
Here we misunderstood each other again (you talking ONLY about input, me talking ALSO about writing to VMRK). Now that that's cleared up, I agree with you:
That just leaves us with a single problem: How to encode in VMRK if one event pertains to several (more than 1, less than "all") channels? My proposal was:
Do you see a BrainVision spec compliant alternative? If there is no alternative, there are two options:
|
Haha so many misunderstandings!! So sorry about the confusion!! On a somewhat unrelated side note, I'm currently in Berlin and happy to meet up for a coffee or beer if you feel like it 👍 Just drop me a message on Discord |
If repeating a marker for each affected channel is the only compliant solution, then I'd go for it. Just to make sure I understand the format: a trailing zero means "all channels", correct? In MNE-Python, "all channels" corresponds to an empty list (https://mne.tools/dev/generated/mne.Annotations.html#mne-annotations), so how would the API handle this? (I lost track, so maybe @sappelhoff can you summarize the current state?) |
correct
yes
A user passes the following input to `write_raw_brainvision`:
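(an illustrative stand-in in the agreed list-of-dicts form, using an empty "channels" list to mean "all channels", per the MNE convention mentioned above)

```python
events = [
    {"onset": 1000, "duration": 1, "description": 1,
     "type": "Stimulus", "channels": []},       # empty list: all channels
    {"onset": 2000, "duration": 50, "description": "BAD signal dropout",
     "type": "Comment", "channels": [1, 23]},   # channel-specific
]
```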
pybv will then write the following to VMRK:
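(continuing the illustrative example above; the multi-channel marker is repeated once per channel)

```
Mk1=Stimulus,S  1,1000,1,0
Mk2=Comment,BAD signal dropout,2000,50,1
Mk3=Comment,BAD signal dropout,2000,50,23
```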
I think logically these events in VMRK make sense and they are compliant. It's still not pretty, but we have our constraints to work with here. We still need to check that MNE-Python, EEGLAB, FieldTrip, and Brainstorm would properly read this, or at least not produce a bunch of nonsense. The input for the [...] |
Thanks! This looks good. We could also directly accept an `mne.Annotations` object. In addition to the readers you mentioned, we should first test if BrainVision Analyzer reads such files. |
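A minimal sketch of such a conversion to the agreed list-of-dicts input, assuming MNE's `Annotations` attributes `onset`, `duration`, `description`, and `ch_names`; the function name and the "Comment" default are hypothetical:

```python
import mne

def annotations_to_dicts(annotations: "mne.Annotations", sfreq: float) -> list:
    """Convert mne.Annotations (in seconds) to pybv-style dicts (in samples)."""
    out = []
    for onset, duration, description, ch_names in zip(
        annotations.onset,
        annotations.duration,
        annotations.description,
        annotations.ch_names,  # empty tuple means "all channels"
    ):
        out.append({
            "onset": int(round(onset * sfreq)),
            "duration": int(round(duration * sfreq)),
            "description": description,
            "type": "Comment",  # assumption: write annotations as Comment markers
            "channels": list(ch_names),
        })
    return out
```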
I just revisited an old MNELAB issue and found out that being able to pass [...] |
@cbrnr I think we did find a solution, and a consistent API for writing events to VMRK (this issue) and coordinates to VHDR (#44 (comment)). The basic API is described in #77 (comment) -> input to `write_raw_brainvision` as a list of dicts. Talking about both events (this issue) and coordinates (#44), we "only" need to:
I am currently fully blocked due to having to submit my dissertation soon, so if you want to give it a shot - be my guest! Else, I hope to be able to work on this some time this year, as I consider it a major improvement that I definitely want for pybv. |
Thanks @sappelhoff! I don't have time right now either, but I'm happy to help. I think it would be nice if the events parameter could also accept `mne.Annotations` objects directly. |
Agreed to support ndarray (the "custom" type we currently support, which can be easily indexed from the MNE type), `mne.Annotations`, and a standard list of dicts. I think it's important to offer non-MNE options as well. |
Since pybv does not depend on MNE, how would you implement a parameter accepting also an `mne.Annotations` object? [...] |
Mmh, and in "pure pybv" we'd only have ndarray or list of dicts? Yes, that sounds good. So far I had thought to have a check of parameter types, and to raise if the then-optional MNE dependency is not present, but I like your suggestion better.
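A minimal sketch of that kind of optional-dependency type check (function name and error messages are hypothetical):

```python
import numpy as np

def _check_events(events):
    """Normalize/validate the events input (sketch)."""
    if isinstance(events, (list, np.ndarray)):
        return events  # ndarray or list of dicts: handled natively
    try:
        from mne import Annotations  # optional dependency
    except ImportError:
        raise ValueError(
            "events must be an ndarray or a list of dicts; passing "
            "mne.Annotations additionally requires MNE-Python to be installed"
        )
    if isinstance(events, Annotations):
        return events  # would be converted to a list of dicts internally
    raise ValueError(f"Unsupported type for events: {type(events)}")
```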
Currently pybv has limited functionality to write to the VMRK file:

However, the BrainVision VMRK file would support much more, see:

We could think about a feature that can write annotations as used for example in MNE-Python: each annotation having an onset, ~~offset~~ duration, and description. We could even ~~go beyond what MNE-Python currently (?) supports and~~ support channel-specific annotations (although that could be done later).

Overall I envision that users could pass both "events" and "annotations" to `write_raw_brainvision` and that these two inputs would be internally merged in a reasonable way and all written to the VMRK file. Together with #44 that'd bring us one step closer to being able to properly export MNE-Python `raw` objects to BrainVision, see also: https://github.com/mne-tools/mne-python/blob/53d0f5de8a8a99e3f4211a8dc1361930e3cc3cd8/mne/export/_export.py#L43-L44

WDYT?
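For illustration, the envisioned call might look roughly like this (hypothetical sketch; everything beyond the existing `events` parameter is an assumption, not the current signature):

```python
write_raw_brainvision(
    data=data, sfreq=sfreq, ch_names=ch_names,
    fname_base="sub-01_task-rest", folder_out="./out",
    events=events,            # e.g., ndarray of shape (n, 2) or (n, 3)
    annotations=annotations,  # e.g., onset/duration/description records
)
```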