Not working with OVH & Proxmox #81

adiantek opened this issue Jun 13, 2023 · 3 comments


@adiantek

Hi,

I don't know why, but ndppd doesn't write adverts to the OVH gateway and I'm unable to use IPv6 inside LXC containers.

root@ovh1:~# ndppd -vvvv
(notice) ndppd (NDP Proxy Daemon) version 0.2.4
(notice) Using configuration file '/etc/ndppd.conf'
(debug) {
(debug)     proxy vmbr0 {
(debug)         rule 2001:41d0:602:XXXX::/64 {
(debug)             static
(debug)         }
(debug)     }
(debug) }
(warning) Low prefix length (64 <= 120) when using 'static' method
(debug) fd=3, hwaddr=d0:50:99:de:ef:d
(debug) iface::allmulti() state=1, _name="vmbr0"
(debug) proxy::create() if=vmbr0
(debug) rule::create() if=vmbr0, addr=2001:41d0:602:XXXX::/64, auto=no
(debug) iface::fixup_pollfds() _map.size()=1
(debug) reading routes
(debug) iface::read() len=86
(debug) iface::read_solicit() saddr=2001:41d0:602:3fff:ff:ff:ff:fd, daddr=ff02::1:ff00:0, len=86
(debug) proxy::handle_solicit() saddr=2001:41d0:602:3fff:ff:ff:ff:fd, taddr=2001:41d0:602:3f5b::
(debug) checking 2001:41d0:602:XXXX::/64 against 2001:41d0:602:3f5b::
(debug) iface::read() len=86
(debug) iface::read_solicit() saddr=2001:41d0:602:3fff:ff:ff:ff:fd, daddr=ff02::1:ff00:0, len=86
(debug) proxy::handle_solicit() saddr=2001:41d0:602:3fff:ff:ff:ff:fd, taddr=2001:41d0:602:3f5b::
(debug) checking 2001:41d0:602:XXXX::/64 against 2001:41d0:602:3f5b::
(debug) iface::read() len=86
(debug) iface::read_solicit() saddr=2001:41d0:602:3fff:ff:ff:ff:fd, daddr=ff02::1:ff00:0, len=86
(debug) proxy::handle_solicit() saddr=2001:41d0:602:3fff:ff:ff:ff:fd, taddr=2001:41d0:602:3f5b::
(debug) checking 2001:41d0:602:XXXX::/64 against 2001:41d0:602:3f5b::
(debug) iface::read() len=86
(debug) iface::read_solicit() saddr=2001:41d0:602:3fff:ff:ff:ff:fd, daddr=ff02::1:ff00:0, len=86
(debug) proxy::handle_solicit() saddr=2001:41d0:602:3fff:ff:ff:ff:fd, taddr=2001:41d0:602:3f5b::
(debug) checking 2001:41d0:602:XXXX::/64 against 2001:41d0:602:3f5b::
(debug) iface::read() len=86
(debug) iface::read_solicit() saddr=2001:41d0:602:3fff:ff:ff:ff:fd, daddr=ff02::1:ff00:0, len=86
(debug) proxy::handle_solicit() saddr=2001:41d0:602:3fff:ff:ff:ff:fd, taddr=2001:41d0:602:3f5b::
(debug) checking 2001:41d0:602:XXXX::/64 against 2001:41d0:602:3f5b::
(debug) iface::read() len=86
(debug) iface::read_solicit() saddr=fe80::606a:fdff:fe4c:f6f8, daddr=2001:41d0:602:XXXX::2, len=86
(debug) iface::read() len=86
(debug) iface::read_solicit() saddr=fe80::1018:5dff:fede:7c64, daddr=2001:41d0:602:XXXX::2, len=86
(debug) iface::read() len=24
(debug) iface::read_advert() saddr=fe80::80a4:e7ab:ff7f:0, taddr=fe80::606a:fdff:fe4c:f6f8, len=24
(debug) iface::read() len=24
(debug) iface::read_advert() saddr=fe80::80a4:e7ab:ff7f:0, taddr=fe80::1018:5dff:fede:7c64, len=24
(debug) iface::read() len=86
(debug) iface::read_solicit() saddr=fe80::606a:fdff:fe4c:f6f8, daddr=fe80::d250:99ff:fede:ef0d, len=86
(debug) iface::read() len=86
(debug) iface::read_solicit() saddr=fe80::1018:5dff:fede:7c64, daddr=fe80::d250:99ff:fede:ef0d, len=86
(debug) reading routes
(debug) iface::read() len=86
(debug) iface::read_solicit() saddr=::, daddr=ff02::1:ff4c:f6f8, len=86
(debug) proxy::handle_solicit() saddr=::, taddr=fe80::606a:fdff:fe4c:f6f8
(debug) checking 2001:41d0:602:XXXX::/64 against fe80::606a:fdff:fe4c:f6f8
(debug) iface::read() len=86
(debug) iface::read_solicit() saddr=::, daddr=ff02::1:ff00:102, len=86
(debug) proxy::handle_solicit() saddr=::, taddr=2001:41d0:602:XXXX::102
(debug) checking 2001:41d0:602:XXXX::/64 against 2001:41d0:602:XXXX::102
(debug) session::create() pr=b3f65890, saddr=::, daddr=ff02::1:ff00:102, taddr=2001:41d0:602:XXXX::102 =b3f66540
(debug) iface::write_advert() daddr=::, taddr=2001:41d0:602:XXXX::102
(debug) iface::write() daddr=::, len=32
(debug) session::~session() this=b3f66540
(debug) iface::read() len=86
(debug) iface::read_solicit() saddr=2001:41d0:602:XXXX::102, daddr=ff02::1:ff00:2, len=86
(debug) proxy::handle_solicit() saddr=2001:41d0:602:XXXX::102, taddr=2001:41d0:602:XXXX::2
(debug) checking 2001:41d0:602:XXXX::/64 against 2001:41d0:602:XXXX::2
(debug) session::create() pr=b3f65890, saddr=2001:41d0:602:XXXX::102, daddr=ff02::1:ff00:2, taddr=2001:41d0:602:XXXX::2 =b3f66540
(debug) iface::write_advert() daddr=2001:41d0:602:XXXX::102, taddr=2001:41d0:602:XXXX::2
(debug) iface::write() daddr=2001:41d0:602:XXXX::102, len=32
(debug) session::~session() this=b3f66540
(debug) iface::read() len=86
(debug) iface::read_solicit() saddr=2001:41d0:602:XXXX::102, daddr=ff02::1:ff00:0, len=86
(debug) proxy::handle_solicit() saddr=2001:41d0:602:XXXX::102, taddr=2001:41d0:602:XXXX::
(debug) checking 2001:41d0:602:XXXX::/64 against 2001:41d0:602:XXXX::
(debug) session::create() pr=b3f65890, saddr=2001:41d0:602:XXXX::102, daddr=ff02::1:ff00:0, taddr=2001:41d0:602:XXXX:: =b3f66540
(debug) iface::write_advert() daddr=2001:41d0:602:XXXX::102, taddr=2001:41d0:602:XXXX::
(debug) iface::write() daddr=2001:41d0:602:XXXX::102, len=32
(debug) session::~session() this=b3f66540
(debug) iface::read() len=24
(debug) iface::read_advert() saddr=2001:41d0:602:XXXX:80a4:e7ab:ff7f:0, taddr=2001:41d0:602:XXXX::102, len=24
(debug) iface::read() len=86
(debug) iface::read_solicit() saddr=fe80::606a:fdff:fe4c:f6f8, daddr=fe80::d250:99ff:fede:ef0d, len=86
^C(error) Shutting down...
(notice) Bye
(debug) iface::~iface()
(debug) iface::allmulti() state=1, _name="vmbr0"

My sysctl.conf on the host:

net.ipv6.conf.all.autoconf = 0
net.ipv6.conf.default.autoconf = 0
net.ipv6.conf.vmbr0.autoconf = 0

net.ipv6.conf.all.accept_ra = 0
net.ipv6.conf.default.accept_ra = 0
net.ipv6.conf.vmbr0.accept_ra = 0

net.ipv6.conf.all.router_solicitations = 1
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.default.proxy_ndp = 1
net.ipv6.conf.all.proxy_ndp = 1

net.ipv4.ip_forward = 1

Host network configuration:

root@ovh1:~# ip -6 a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 state UNKNOWN qlen 1000
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
4: vmbr0: <BROADCAST,MULTICAST,ALLMULTI,UP,LOWER_UP> mtu 1500 state UP qlen 1000
    inet6 2001:41d0:602:XXXX::2/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::d250:99ff:fede:ef0d/64 scope link
       valid_lft forever preferred_lft forever
5: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP qlen 1000
    inet6 fe80::3c05:3fff:fe3f:e8e8/64 scope link
       valid_lft forever preferred_lft forever
root@ovh1:~# ip -6 route
::1 dev lo proto kernel metric 256 pref medium
2001:41d0:602:XXXX::/64 dev vmbr0 proto kernel metric 256 pref medium
fe80::/64 dev vmbr0 proto kernel metric 256 pref medium
fe80::/64 dev vmbr1 proto kernel metric 256 pref medium
default via 2001:41d0:602:40ff:ff:ff:ff:ff dev vmbr0 proto kernel metric 1024 onlink pref medium

Guest network configuration:

root@ovh1:~# pct enter 102
root@dcbot:~# ip -6 a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 state UNKNOWN qlen 1000
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0@if33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP qlen 1000
    inet6 2001:41d0:602:XXXX::102/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::606a:fdff:fe4c:f6f8/64 scope link
       valid_lft forever preferred_lft forever
3: eth1@if34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP qlen 1000
    inet6 fe80::1cc1:f6ff:fef9:f38/64 scope link
       valid_lft forever preferred_lft forever
root@dcbot:~# ip -6 route
::1 dev lo proto kernel metric 256 pref medium
2001:41d0:602:XXXX::/64 dev eth0 proto kernel metric 256 pref medium
fe80::/64 dev eth1 proto kernel metric 256 pref medium
fe80::/64 dev eth0 proto kernel metric 256 pref medium
default via 2001:41d0:602:XXXX::2 dev eth0 proto static metric 1024 pref medium

Host -> gateway works
Host -> global IPv6 works
Host -> Guest works

Guest -> gateway doesn't work
Guest -> global IPv6 doesn't work
Guest -> Host works

@congzhangzh

Would you mind giving the new implementation a try? #82 @adiantek

@ValdikSS

ValdikSS commented Oct 4, 2024

#71 (comment)

@benaryorg

benaryorg commented Nov 13, 2024

I just stumbled across this issue and I'm going to chip in with a few comments, since I've also been using ndppd with OVH for quite a few years now.


First of all, if you're talking about OVH I assume you mean their dedicated server offerings (which your gateway addresses align with). Note that some newer DCs have switched to routed setups, so ndppd isn't even necessary in some instances; however, I'm also assuming you're using this on the older infra that still uses NDP for address resolution.


Second, I see you've masked out your address in the logs; however, looking at them I see this:

(debug) checking 2001:41d0:602:XXXX::/64 against 2001:41d0:602:3f5b::

Assuming you used search & replace, you either used vi(m) without a trailing /g to replace multiple occurrences per line, or you may have had a typo there. Since OVH constructs the gateway address by taking the first address of your /56 and replacing the lower byte of each remaining hextet with ff, the gateway would have been 2001:41d0:602:3fff:ff:ff:ff:ff, which again matches the address that the NDP requests come from, but does not match the route on your host.
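
To make that construction concrete, here's a small Python sketch; keeping the first three hextets untouched is my reading of the example above, not something OVH documents:

import ipaddress

def ovh_gateway(prefix):
    """Derive the legacy OVH gateway for a dedicated-server prefix.

    Keeps the first three hextets as-is and sets the low byte of each
    remaining hextet to 0xff, which reproduces the example above.
    """
    net = ipaddress.IPv6Network(prefix, strict=False)
    hextets = net.network_address.exploded.split(":")
    patched = hextets[:3] + [h[:2] + "ff" for h in hextets[3:]]
    return str(ipaddress.IPv6Address(":".join(patched)))

print(ovh_gateway("2001:41d0:602:3f5b::/64"))
# -> 2001:41d0:602:3fff:ff:ff:ff:ff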


That said, please don't set up your IPs like that.
You really, really do want to have an IP address on your eth0.
It's all fine while your server comes up, the bridge is there, and no containers run, but as soon as your first container boots up you are in limbo, so to speak.
If that container ever shuts down, your entire host becomes unreachable, since a bridge goes down when no links are connected.
On hosters which require an onlink route I'd probably add the host's IP to eth0 as a /128; however, with OVH there technically is a /56 on the link, so you can absolutely have your 2001:db8:1234:5678::1/56 on eth0 and your 2001:db8:1234:5678::2/64 on vmbr0 (the corresponding ip commands are sketched after the config below).
Since the longest matching prefix wins, this works out nicely for both ends.
The only caveat is that you now have one address of the /64 outside that link, which means your ndppd config should also include that specific address in the other direction:

proxy eth0 {
    rule 2001:db8:1234:5678::/64 {
        static
        # alternatively, to avoid polluting NDP tables (remove the static if you use iface)
        #iface vmbr0
    }
}
proxy vmbr0 {
    rule 2001:db8:1234:5678::1/128 {
        static
    }
}

(Now that I read the debug log, this probably also means that your config had the wrong interface specified.)
(Note that I double-checked whether the above configuration is correct, in both my current in-production Nix config and my older Puppet configuration that ran on Gentoo, and yes, the above is what works; specifying proxy vmbr0 and then the /64 really shouldn't work. I do have router no in some places, but that shouldn't matter for OVH.)
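
As promised above, here's a hedged sketch of that address assignment with iproute2, using the same documentation addresses; the gateway address is only a placeholder derived from the construction described earlier:

ip -6 addr add 2001:db8:1234:5678::1/56 dev eth0
ip -6 addr add 2001:db8:1234:5678::2/64 dev vmbr0
ip -6 route add default via 2001:db8:1234:56ff:ff:ff:ff:ff dev eth0 onlink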


Now taking the above one step further: you seem to be setting the gateway for the containers statically. I would recommend just running radvd (or something like it) and telling the containers to accept router advertisements (net.ipv6.conf.*.accept_ra = 1), which means you can get rid of the ...::2 entirely, since the routing part will then just use the link-local address. However, that only really works if you can make sure that your SLAAC addresses remain static (your MAC addresses probably are static anyway).
Oh, and if radvd does support the route info stuff nowadays (I didn't check), remember to also increase your containers' net.ipv6.conf.*.accept_ra_rt_info_max_plen (the max variant is the one that matters for accepting a /64 route), but I would generally advise using ndppd to proxy the /128 onto the bridge instead; it just works and doesn't require fiddling with OS settings to get the guests to accept the extra route info.
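
For reference, a minimal radvd.conf sketch for the bridge, again with the documentation prefix and leaving all lifetimes at their defaults:

interface vmbr0
{
    AdvSendAdvert on;
    prefix 2001:db8:1234:5678::/64
    {
        AdvOnLink on;
        AdvAutonomous on;
    };
};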


Summarizing: I see a few issues with the configuration and logs you've posted (specifically the interface in the config as well as the mismatched addresses) that lead me to believe this was actually a configuration error rather than an issue with ndppd.

Either way, I've had a bunch of hosts with such a bridge-and-ndppd setup running with OVH (and others) at different DCs, and for me it all worked fine on every single one of them, so if you have issues there I'd be happy to help if you ever want to get that stuff sorted out.
In the meantime I think the issue can be closed, since the setup probably doesn't exist in that form anymore, as the issue is from last year (though I'm not a maintainer, just chipping in with an opinion on that one).


Edit: also a subtle but very huge thanks to everyone who works or worked on ndppd; it's saved me so many headaches <3
