allow configuring hairpinning on default nomad bridge #13834
Conversation
Force-pushed from fe51bb0 to e36139b
Force-pushed from e36139b to 184587c
Force-pushed from 184587c to 991ba2e
I've just finished testing in a staging cluster I have access to and it seems to work as intended. Please let me know if there's anything I should change or update.
Hi @A-Helberg, and thanks for this PR! I'm reading #13352 and seeing that my colleagues @jrasell and @DerekStrickland recommended making this configurable on the basis of backwards compatibility.
We're not fingerprinting this value, which means a cluster could have half the clients with the value set and the other half without (this kind of thing becomes painful for very large clusters). The scheduler would have no way to tell which is which and could place your Loki allocation on a node that doesn't support the networking configuration. That's a little painful for operators, because they'd have to set a constraint to make sure allocations land on the right nodes unless they know all their nodes have it set. But maybe that's ok so long as the administrator marks nodes correctly via node metadata.
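As one sketch of that workaround (the metadata key name below is hypothetical, not something Nomad defines), an operator could tag hairpin-enabled clients with node metadata in the agent configuration and constrain the affected jobs to them:

```hcl
# Agent config on clients that enable the option (hypothetical key name):
client {
  meta {
    bridge_hairpin = "true"
  }
}
```

```hcl
# In the jobspec that needs hairpinning (e.g. Loki):
constraint {
  attribute = "${meta.bridge_hairpin}"
  value     = "true"
}
```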
That being said, the CNI bridge plugin has a bunch of knobs like this that we don't currently expose. In #13824 there was a suggestion to configure the bridge CNI template via a config template file, rather than by exposing yet another config value like this. That would give you a lot more flexibility if the bridge plugin gains new features.
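On the CNI side, the knob in question is the bridge plugin's `hairpinMode` field. The following is a minimal, illustrative sketch of the template-file idea, not Nomad's actual template or code; the bridge name and subnet are placeholders:

```go
package main

import (
	"os"
	"text/template"
)

// Placeholder conflist template; hairpinMode is the bridge CNI plugin's knob
// for allowing traffic to be sent back out the port it arrived on.
const bridgeConflist = `{
  "cniVersion": "0.4.0",
  "name": "nomad",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "{{.BridgeName}}",
      "isGateway": true,
      "hairpinMode": {{.HairpinMode}},
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "{{.Subnet}}" }]],
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    }
  ]
}
`

func main() {
	tmpl := template.Must(template.New("bridge").Parse(bridgeConflist))
	// With the template-file approach, the operator supplies this file,
	// instead of Nomad exposing a dedicated config flag per knob.
	err := tmpl.Execute(os.Stdout, struct {
		BridgeName  string
		HairpinMode bool
		Subnet      string
	}{BridgeName: "nomad", HairpinMode: true, Subnet: "172.26.64.0/20"})
	if err != nil {
		panic(err)
	}
}
```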
@@ -142,6 +142,9 @@ client {

- `bridge_network_subnet` `(string: "172.26.64.0/20")` - Specifies the subnet
  which the client will use to allocate IP addresses from.

- `bridge_network_hairpin` `(bool: false)` - Specifies wether the bridge should
Suggested change (typo, "wether" → "whether"):

- `bridge_network_hairpin` `(bool: false)` - Specifies whether the bridge should
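For reference, a minimal sketch of how the option proposed in this PR would be set in an agent configuration, assuming the `bridge_network_hairpin` name used here (the feature eventually shipped via #15961 instead):

```hcl
client {
  enabled = true

  # Proposed in this PR: let traffic entering a bridge port be sent back out
  # the same port, so a task can reach itself via its advertised address.
  bridge_network_hairpin = true
}
```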
@@ -300,6 +300,9 @@ type ClientConfig struct {

	// bridge network mode
	BridgeNetworkName string `hcl:"bridge_network_name"`

	// BridgeNetworkHairpin sets wether to allow hairpinning on the bridge
Suggested change (typo, "wether" → "whether"):

	// BridgeNetworkHairpin sets whether to allow hairpinning on the bridge
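As a small, standalone illustration (not Nomad's actual code) of how the `hcl:"bridge_network_hairpin"` tag would map the agent config key onto this field, using the hashicorp/hcl v1 decoder that this tag syntax belongs to:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl"
)

// Trimmed-down stand-in for the relevant ClientConfig fields.
type clientConfig struct {
	BridgeNetworkName    string `hcl:"bridge_network_name"`
	BridgeNetworkHairpin bool   `hcl:"bridge_network_hairpin"`
}

func main() {
	src := `
bridge_network_name    = "nomad"
bridge_network_hairpin = true
`
	var cfg clientConfig
	if err := hcl.Decode(&cfg, src); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", cfg) // {BridgeNetworkName:nomad BridgeNetworkHairpin:true}
}
```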
Thanks for the feedback! I'll address both comments ASAP. The templating idea sounds like a better long-term solution; it would decouple Nomad from any future changes in the CNI plugin. On the other hand, if you decide to go with this PR, I'm not familiar with fingerprinting in Nomad, so I would appreciate a file to start working from.
Hi @A-Helberg 👋 We ended up missing this PR and re-implemented this feature in #15961, so I'm going to close this. Apologies for that, but thank you so much for all the work!
Hairpinning is required by many clustering applications (e.g. Grafana Loki).
These applications connect to all members of their cluster, including themselves.
Currently they cannot connect to themselves because hairpin mode is not enabled on the default Nomad bridge.
As suggested in #13352, I've added a configuration option that allows a user to enable hairpinning on the default Nomad CNI bridge.