This repository has been archived by the owner on Oct 26, 2022. It is now read-only.

netlink-proto: let the user handle unsolicited messages in a more flexible way #209

Open
dzamlo opened this issue Nov 30, 2021 · 6 comments
Labels: help wanted

Comments

@dzamlo
Contributor

dzamlo commented Nov 30, 2021

Currently, all unsolicited messages are sent to an unbounded channel. That's not always the best option. netlink-proto would be more flexible if the user was able to pass an async "callback" to handle them.

Some errors (like ENOBUFS #139) could be handled in a similar way.
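To make the idea concrete, here is a minimal sketch of what such an API could look like; the function name and signature are hypothetical, not an existing netlink-proto API, and `Msg` stands in for `NetlinkMessage<T>`:

```rust
use std::future::Future;

/// Hypothetical constructor argument: `on_unsolicited` is invoked for every
/// message that does not match a pending request, instead of that message
/// being pushed into an unbounded channel.
fn with_unsolicited_handler<Msg, F, Fut>(on_unsolicited: F)
where
    F: FnMut(Msg) -> Fut,
    Fut: Future<Output = ()>,
{
    // The connection's event loop would call `on_unsolicited(msg)` and either
    // await the returned future in place or spawn it on an executor.
    let _ = on_unsolicited;
}
```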

@little-dude
Owner

I like the idea. Perhaps via a trait rather than a closure? That would be quite a breaking change, though.
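Something along these lines, perhaps (a sketch only; the trait name and shape are made up):

```rust
use std::future::Future;
use std::pin::Pin;

/// Hypothetical handler trait for unsolicited messages. Returning a boxed
/// future keeps the trait object-safe (the usual pre-`async fn in trait`
/// workaround), so the connection could hold a `Box<dyn UnsolicitedHandler<_>>`.
trait UnsolicitedHandler<Msg> {
    fn handle(&mut self, message: Msg) -> Pin<Box<dyn Future<Output = ()> + Send + '_>>;
}
```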

@little-dude
Owner

Thinking about this a bit more, I'm less convinced.

The issue with an async handler is that we either need to spawn a task every time the handler is called, or create a future that we poll from within the connection.

Spawning a task may have a cost if we receive a lot of messages. It also means that we must pick a runtime. Creating a future that we store in the connection comes with some complexity, and means we have to decide on a strategy for creating and polling this future: do we create a new future for each message as soon as it arrives, or do we wait for the current handler to finish? These decisions can have a big impact on performance so I'm not sure the library should make this decision.
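A rough sketch of the second option, just to illustrate the trade-off (this is not netlink-proto's actual internals): the connection would keep handler futures in something like a `FuturesUnordered` and drive them from its own poll loop, so no runtime has to be picked, but a concurrency policy still has to be chosen.

```rust
use std::future::Future;

use futures::stream::{FuturesUnordered, StreamExt};

/// Drives an async handler over incoming messages without spawning tasks.
/// `messages` and `handler` are placeholders for the connection's receive
/// path and the user-supplied callback.
async fn drive_handlers<Msg, F, Fut>(messages: Vec<Msg>, mut handler: F)
where
    F: FnMut(Msg) -> Fut,
    Fut: Future<Output = ()>,
{
    let mut in_flight = FuturesUnordered::new();
    for msg in messages {
        // Strategy A: start a handler future per message as it arrives.
        in_flight.push(handler(msg));
        // Strategy B would instead `handler(msg).await` here, i.e. wait for
        // the current handler to finish before touching the next message.
    }
    // Poll all in-flight handlers to completion.
    while in_flight.next().await.is_some() {}
}
```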

@little-dude
Owner

A synchronous handler would probably allow more flexibility, but then if the user passes in a handler that blocks, our connection will get stuck.
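One mitigation, assuming a synchronous handler were the API: the user keeps the handler non-blocking by forwarding into a bounded channel with `try_send`, and the backpressure policy lives in what they do on overflow. A sketch with made-up names:

```rust
use futures::channel::mpsc;

/// Builds a non-blocking synchronous handler that forwards messages into a
/// bounded channel. When the buffer is full, the message is dropped instead
/// of blocking the connection (a counter or log line could go there instead).
fn bounded_forwarder<Msg>(buffer: usize) -> (impl FnMut(Msg), mpsc::Receiver<Msg>) {
    let (mut tx, rx) = mpsc::channel(buffer);
    let handler = move |msg: Msg| {
        if tx.try_send(msg).is_err() {
            // Buffer full or receiver gone: drop the message rather than block.
        }
    };
    (handler, rx)
}
```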

@dzamlo
Contributor Author

dzamlo commented Dec 19, 2021

I'm not sure the library should make this decision.

But the library already makes a decision: stuffing the messages in an unbounded queue with no backpressure. This means that the library has to make a decision, and I believe the current choice is maybe not optimal.

But I don't know enough about the low-level details of async to really have an idea of what the best solution is. Only that it should likely be more flexible.
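For reference, the way unsolicited messages surface today is roughly this (sketched from memory, so the exact signatures should be checked against the current netlink-proto docs): the third element returned by `new_connection` is the unbounded receiver in question.

```rust
use futures::StreamExt;
use netlink_packet_route::RtnlMessage;

const NETLINK_ROUTE: isize = 0; // protocol number from <linux/netlink.h>

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let (connection, _handle, mut unsolicited) =
        netlink_proto::new_connection::<RtnlMessage>(NETLINK_ROUTE)?;
    tokio::spawn(connection);

    // Everything unsolicited lands here; if this loop is slow or never runs,
    // messages accumulate in the unbounded channel with no backpressure.
    while let Some((message, _addr)) = unsolicited.next().await {
        println!("unsolicited: {:?}", message);
    }
    Ok(())
}
```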

@little-dude
Owner

But the library already makes a decision: stuffing the messages in an unbounded queue with no backpressure. This means that the library has to make a decision, and I believe the current choice is maybe not optimal.

Fair enough.

But I don't know enough about the low-level details of async to really have an idea of what the best solution is. Only that it should likely be more flexible.

I think the most flexible would be a synchronous closure, but as I said, this could also be a footgun because it could block the connection.

I'd like to get more input on this.

@stbuehler
Contributor

I think a breaking change is appropriate.

I think it would be good to have:

  1. a way to handle multicasts (usually with sequence_number == 0, I think?)
  2. a way to handle unexpected responses (sequence_number != 0?) - I think this deserves a different handler, especially once #138 ("netlink-proto: if messages are never acknowledged, the Protocol.pending_requests keep growing") gets fixed; they should not turn up as "multicast events"
  3. a way to handle various errors (decode, encode, ENOBUFS, other IO, ...?) - this one could probably be done later

All these should be "opt-in" imho - i.e. messages and errors should not be stored in an infinite backlog unless the user requested them.

For the messages I think unbounded channels are ok, but wrap them in a way that only sends if there is at least one receiver.
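A minimal sketch of that wrapping idea (a hypothetical helper, not existing code): with `futures::channel::mpsc` the sender can tell whether the receiver is still alive, so nothing accumulates unless the user actually holds a receiver.

```rust
use futures::channel::mpsc::{unbounded, UnboundedReceiver, UnboundedSender};

/// Wraps an unbounded sender so that messages are only queued while a
/// receiver exists; otherwise they are silently dropped.
struct OptInSender<T>(UnboundedSender<T>);

impl<T> OptInSender<T> {
    fn send(&self, msg: T) {
        // `is_closed()` becomes true once the receiver has been dropped.
        if !self.0.is_closed() {
            let _ = self.0.unbounded_send(msg);
        }
    }
}

fn opt_in_channel<T>() -> (OptInSender<T>, UnboundedReceiver<T>) {
    let (tx, rx) = unbounded();
    (OptInSender(tx), rx)
}
```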
