
It should have been like this from the beginning. Also: #18

Open
amfern opened this issue Aug 9, 2020 · 6 comments

Comments

@amfern
Member

amfern commented Aug 9, 2020

It should have been like this from the beginning. Also:
For serialization, have a look over here: https://github.com/wucke13/mavlink-msp-bridge/blob/master/src/msp.rs#L405
For deserialization, over there: https://github.com/wucke13/mavlink-msp-bridge/blob/master/src/msp.rs#L456 and here https://github.com/wucke13/mavlink-msp-bridge/blob/master/src/msp.rs#L230

The read macro could easily be adjusted to not use the Read trait but instead do some slicing operation, maybe with an offset?

Originally posted by @wucke13 in #16 (comment)
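
A minimal sketch of the slicing adjustment suggested above, assuming a hypothetical `get_slice!` macro and error type (neither is from the actual crate); the macro tracks a running offset into a byte slice instead of pulling bytes through a `Read` implementor:

```rust
// Hypothetical slicing variant of the read macro: take a slice plus a
// mutable offset, check bounds, and advance the offset on success.
macro_rules! get_slice {
    ($buf:expr, $offset:ident, $n:expr) => {{
        let start = $offset;
        let end = start + $n;
        if end > $buf.len() {
            return Err(ParseError::NotEnoughBytes);
        }
        $offset = end;
        &$buf[start..end]
    }};
}

#[derive(Debug)]
enum ParseError {
    NotEnoughBytes,
}

fn parse_header(buf: &[u8]) -> Result<(u8, u8), ParseError> {
    let mut offset = 0usize;
    // Preamble validation and checksum are omitted in this sketch.
    let _preamble = get_slice!(buf, offset, 3); // e.g. b"$M<" in MSP v1
    let size = get_slice!(buf, offset, 1)[0];
    let command = get_slice!(buf, offset, 1)[0];
    Ok((size, command))
}
```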

@amfern
Member Author

amfern commented Sep 10, 2020

@wucke13 I went over the decode code, and I think there is a problem. It assumes that the Read connection holds a complete MSP packet, but this won't always be the case.
What if only half of the MSP packet's bytes have been received and the other half is still in transit?
IIUC, the decode fn will try to parse the first half and fail, then parse the second half and fail, effectively losing a valid message.

@wucke13
Collaborator

wucke13 commented Sep 10, 2020

Well, then have a look at how my decoder works. It knows up front how many bytes are needed for the next fragment of an MSP message, and only continues once that many bytes are present. It can easily be ported to non-blocking code as well. Or is your comment aimed at my decoder?

@amfern
Member Author

amfern commented Sep 10, 2020

My comment is aimed at https://github.com/wucke13/mavlink-msp-bridge/blob/master/src/msp.rs#L456
How do you know up front how many bytes to expect? Can you point me to the code where that happens?

@wucke13
Collaborator

wucke13 commented Sep 13, 2020

Ah, OK. The decoder is a state machine. Each state represents one part of the message. Almost all states require a fixed number of bytes to be parsed; the only exception is the actual payload. However, before we come to parsing the payload, we have already parsed the preceding fields, and thus know how big the payload is going to be. This gives us one guarantee:

At every point in time, we know exactly how many bytes we have to read to parse the next part/do the next state transition.

Does that make sense so far?
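
For illustration, the state machine might look roughly like this; the state names and the MSP v1 framing shown (3-byte preamble, size, command, payload, checksum) are assumptions for the sketch, not the actual code in msp.rs:

```rust
// Sketch of the state-machine idea. Every state except Payload needs a
// fixed number of bytes; Payload's length is known by the time we reach it.
enum State {
    Preamble,       // fixed: 3 bytes, e.g. b"$M<" in MSP v1
    SizeAndCommand, // fixed: 2 bytes (payload length + command id)
    Payload(usize), // variable, but the length was parsed in the prior state
    Checksum,       // fixed: 1 byte
}

impl State {
    /// At every point, the decoder knows exactly how many bytes the
    /// next state transition requires.
    fn bytes_needed(&self) -> usize {
        match self {
            State::Preamble => 3,
            State::SizeAndCommand => 2,
            State::Payload(len) => *len,
            State::Checksum => 1,
        }
    }
}
```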

To your first question: currently, the get! macro reads exactly the required number of bytes from the reader, in a blocking fashion. So no, the parser won't break if only half a packet has been received; it will block. However, notice this: the code can very easily be ported to be non-blocking or async. To make it non-blocking, one just has to check the number of readable bytes on the Reader (not sure how, but there is surely a way to do it) and return an io::ErrorKind::WouldBlock or similar. To make it async, one only needs to use an AsyncRead and await the read_exact.
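
As a sketch of that async variant, assuming tokio's AsyncReadExt (the crate itself uses blocking Read, so this is an illustration, not its current API):

```rust
use tokio::io::{AsyncRead, AsyncReadExt};

// Awaits until exactly `n` bytes have arrived. A half-received packet
// leaves the future pending instead of failing the parse.
async fn read_part<R: AsyncRead + Unpin>(
    reader: &mut R,
    n: usize,
) -> std::io::Result<Vec<u8>> {
    let mut buf = vec![0u8; n];
    reader.read_exact(&mut buf).await?;
    Ok(buf)
}
```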

Are there questions left on your side?

@amfern
Member Author

amfern commented Sep 14, 2020

OK, I got it now, thanks. Do you think it would be better to refactor this library to use the AsyncRead or Read trait instead of calling parse() with a single byte, so it would work similarly to your deserializer?

@wucke13
Collaborator

wucke13 commented Sep 14, 2020

In general, yes. However, Read is not no_std compatible, and I'm not aware of any async read trait which is. Dropping no_std support would be a high price, IMO.
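
One way to keep no_std while still reading in read_exact-sized chunks would be a small local trait instead of std::io::Read; this is purely a sketch, and the trait, error type, and names are made up for illustration:

```rust
#![no_std] // crate-root attribute; the sketch only needs core

/// Mirrors the io::ErrorKind::WouldBlock idea mentioned above.
pub enum MspError {
    WouldBlock, // not enough bytes available yet
}

/// Minimal byte source the parser could depend on instead of std::io::Read.
pub trait ByteSource {
    fn read_exact(&mut self, buf: &mut [u8]) -> Result<(), MspError>;
}

// A plain byte slice is the simplest implementor: the caller fills a
// buffer however it likes and the parser consumes it front to back.
impl ByteSource for &[u8] {
    fn read_exact(&mut self, buf: &mut [u8]) -> Result<(), MspError> {
        if self.len() < buf.len() {
            return Err(MspError::WouldBlock); // half a packet: try again later
        }
        let (head, tail) = self.split_at(buf.len());
        buf.copy_from_slice(head);
        *self = tail;
        Ok(())
    }
}
```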
