I see ICMP6 neighbor solicitation in tcpdump, but ndppd doesn't seem to be doing anything #71

ghost opened this issue Jul 15, 2021 · 15 comments

Debug output from ndppd with my configuration:

[root@avps ~]# ndppd -vvv
(notice) ndppd (NDP Proxy Daemon) version 0.2.4
(notice) Using configuration file '/etc/ndppd.conf'
(debug) {
(debug)     address-ttl 30000 
(debug)     proxy eth0 {
(debug)         autowire no 
(debug)         keepalive yes 
(debug)         promiscuous no 
(debug)         retries 3 
(debug)         router yes 
(debug)         rule 2605:a140:2045:1635::/64 {
(debug)             autovia no 
(debug)             static 
(debug)         }
(debug)         timeout 500 
(debug)         ttl 30000 
(debug)     }
(debug)     route-ttl 30000 
(debug) }
(warning) Low prefix length (64 <= 120) when using 'static' method
(debug) fd=3, hwaddr=0:50:56:40:a1:2d
(debug) iface::allmulti() state=1, _name="eth0"
(debug) proxy::create() if=eth0
(debug) rule::create() if=eth0, addr=2605:a140:2045:1635::/64, auto=no
(debug) iface eth0 {
(debug)   proxy 230b7e0 {
(debug)     rule 230b8d0 {
(debug)       taddr 2605:a140:2045:1635::/64;
(debug)       static;
(debug)     }
(debug)   }
(debug)   parents {
(debug)   }
(debug) }
(debug) iface::fixup_pollfds() _map.size()=1

And here is an example of the tcpdump output when I ping a random address in the block:

15:46:54.961970 IP6 2607:fb90:28c9:1893:1802:f5bb:3bb4:96cf > 2605:a140:2045:1635::1234: ICMP6, echo request, seq 1, length 40
15:46:54.962327 IP6 avps.owo69.me > ff02::1:ff00:1234: ICMP6, neighbor solicitation, who has 2605:a140:2045:1635::1234, length 32
15:46:56.006641 IP6 avps.owo69.me > ff02::1:ff00:1234: ICMP6, neighbor solicitation, who has 2605:a140:2045:1635::1234, length 32
15:46:57.030647 IP6 avps.owo69.me > ff02::1:ff00:1234: ICMP6, neighbor solicitation, who has 2605:a140:2045:1635::1234, length 32
15:46:58.054833 IP6 avps.owo69.me > 2607:fb90:28c9:1893:1802:f5bb:3bb4:96cf: ICMP6, destination unreachable, unreachable address 2605:a140:2045:1635::1234, length 88

And nothing is output in the ndppd debug log.

I set this up following this guide: http://blog.iopsl.com/ndppd-on-vultr-to-enable-fully-routed-64-for-ipv6/

@rodolfoul

I have the exact same problem.
But ndppd will occasionally work. It seems like some of the solicitations are successfully proxied, while others aren't.
It's as if ndppd's poll for solicitations only picks up a subset of them, and the relevant ones end up being missed and never proxied.

@rodolfoul

OK, maybe for future reference: I had an issue in my routing table. It seems that having two interfaces with equally prefixed IPv6 routes messes up routing; that is, neighbor discovery packets were going out through the wrong interface.

So all I had to do was fix the routes manually, and ndppd started working perfectly.
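
For anyone hitting the same thing, a quick way to check is to ask the kernel which route it would pick for the proxied prefix and to pin the route manually if two interfaces carry it. A minimal sketch, assuming hypothetical interfaces eth0/eth1 and the documentation prefix 2001:db8:1::/64 in place of the real one:

# Ask the kernel which interface it would use for an address in the prefix
ip -6 route get 2001:db8:1::1234

# List all routes covering the prefix; the same prefix appearing on two
# interfaces is the ambiguity described above
ip -6 route show 2001:db8:1::/64

# Drop the route via the wrong interface and pin the intended one
ip -6 route del 2001:db8:1::/64 dev eth1
ip -6 route replace 2001:db8:1::/64 dev eth0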

houmie commented Nov 13, 2021

@ledlamp I have the exact same issue. Have you been able to resolve this, please?

ghost commented Nov 13, 2021

@houmie Nope, sorry. ndppd just doesn't seem to do anything; I even tried a /128.

And my IPv6 routes appear to be fine; the packets are definitely going to the right interface.

houmie commented Nov 30, 2021

Yes, I have done some research, and it has been reported that this project no longer works.

See here: https://quantum2.xyz/2019/03/08/ndp-proxy-route-ipv6-vpn-addresses/

"The common wisdom is to run ndppd, a program that answers neighbour solicitation requests. It can be thought of as a replacement for the kernel’s NDP proxying feature. However, it has been relatively unmaintained, and multiple users reported that it does not work anymore. It did not work for me either."

It seems dnsmasq is a better solution. I haven't tried it yet.
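
For reference, the kernel NDP-proxying feature that quote mentions can be driven directly from the shell; a minimal sketch, assuming a hypothetical interface eth0 and an example address. Note that the kernel feature takes individual addresses rather than prefixes, which is the gap ndppd was written to fill:

# Let the kernel answer neighbor solicitations on behalf of other hosts
sysctl -w net.ipv6.conf.eth0.proxy_ndp=1

# Register one proxy neighbor entry per address to answer for
ip -6 neigh add proxy 2001:db8:1::1234 dev eth0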

@SpareSimian

I've successfully used ndppd, as packaged in the EPEL repository, on my CentOS 7 gateway when communicating with AT&T's "Business in a Box" gateway. Their gateway wasn't configured to route through mine, so I used ndppd to accomplish that. I've temporarily disabled IPv6 on my gateway because the AT&T gateway was intermittently losing its VoIP and IPv6 capability (while IPv4 continued working), but while it was up, ndppd worked great. I believe the EPEL version packages the master branch, not the new experimental branch, so maybe the criticism about it not working applies only to the newer branch.

@shamefulCake1

Turn on promiscuous mode.

Either via the config file setting, or by manually enabling it on the interfaces you are using.
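
Both options in one sketch, assuming a hypothetical interface eth0 (the config setting is the same promiscuous option visible in the debug dumps in this thread):

# Manually enable promiscuous mode on the interface
ip link set dev eth0 promisc on

# Or, equivalently, in /etc/ndppd.conf:
#   proxy eth0 {
#       promiscuous yes
#       ...
#   }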

ValdikSS commented Oct 4, 2024

I'm not entirely sure what's going on, but 'does not work' is indeed the best way I can describe it right now.

ndppd seems to send the neighbor advertisement packet (sendmsg succeeds), but I see absolutely nothing in tcpdump on any interface.

I have the following configuration, running git master:

route-ttl 30000

proxy ens3 {
   promiscuous yes  
   router yes

   rule 2a05:9406::8e8/125 {
      static
   }
}

When a solicitation comes in, ndppd answers it:

# strace -e trace=sendmsg -f -xx -- ndppd -vvv
(notice) ndppd (NDP Proxy Daemon) version 0.2.5
(notice) Using configuration file '/etc/ndppd.conf'
(debug) {
(debug)     proxy ens3 {
(debug)         promiscuous yes 
(debug)         router yes 
(debug)         rule 2a05:9406::8e8/125 {
(debug)             static 
(debug)         }
(debug)         timeout 500 
(debug)         ttl 30000 
(debug)     }
(debug)     route-ttl 30000 
(debug) }
(debug) fd=3, hwaddr=52:54:0:8d:b6:a5
(debug) iface::allmulti() state=1, _name="ens3"
(debug) iface::promiscuous() state=1, _name="ens3"
(debug) proxy::create() if=ens3
(debug) rule::create() if=ens3, addr=2a05:9406::8e8/125, auto=no
(debug) iface ens3 {
(debug)   proxy c015a870 {
(debug)     rule c015a980 {
(debug)       taddr 2a05:9406::8e8/125;
(debug)       static;
(debug)     }
(debug)   }
(debug)   parents {
(debug)   }
(debug) }
(debug) iface::fixup_pollfds() _map.size()=1
(debug) iface::read() ifa=ens3, len=86
(debug) iface::read_solicit() saddr=2a05:9406::da, daddr=ff02::1:ff00:1, taddr=2a05:9406::1, len=86
(debug) proxy::handle_reverse_advert()
(debug) proxy::handle_solicit()
(debug) checking 2a05:9406::8e8/125 against 2a05:9406::1
(debug) iface::read() ifa=ens3, len=86
(debug) iface::read_solicit() saddr=2a05:9406::8c9, daddr=ff02::1:ff00:1, taddr=2a05:9406::1, len=86
(debug) proxy::handle_reverse_advert()
(debug) proxy::handle_solicit()
(debug) checking 2a05:9406::8e8/125 against 2a05:9406::1
(debug) iface::read() ifa=ens3, len=86
(debug) iface::read_solicit() saddr=2a0a:8c44::51, daddr=ff02::1:ff00:1, taddr=2a0a:8c44::1, len=86
(debug) proxy::handle_reverse_advert()
(debug) proxy::handle_solicit()
(debug) checking 2a05:9406::8e8/125 against 2a0a:8c44::1
(debug) iface::read() ifa=ens3, len=86
(debug) iface::read_solicit() saddr=fe80::ca13:3700:290:5760, daddr=ff02::1:ff00:8eb, taddr=2a05:9406::8eb, len=86
(debug) proxy::handle_reverse_advert()
(debug) proxy::handle_solicit()
(debug) checking 2a05:9406::8e8/125 against 2a05:9406::8eb
(debug) session::create() pr=c015a870, proxy=ens3, taddr=2a05:9406::8eb =c015ab90
(debug) session::handle_advert() taddr=2a05:9406::8eb, ttl=30000
(debug) session is active [taddr=2a05:9406::8eb]
(debug) iface::write_advert() daddr=fe80::ca13:3700:290:5760, taddr=2a05:9406::8eb
(debug) iface::write() ifa=ens3, daddr=fe80::ca13:3700:290:5760, len=32
sendmsg(3, {msg_name={sa_family=AF_INET6, sin6_port=htons(58), sin6_flowinfo=htonl(0), inet_pton(AF_INET6, "\x66\x65\x38\x30\x3a\x3a\x63\x61\x31\x33\x3a\x33\x37\x30\x30\x3a\x32\x39\x30\x3a\x35\x37\x36\x30", &sin6_addr), sin6_scope_id=0}, msg_namelen=28, msg_iov=[{iov_base="\x88\x00\x00\x00\xc0\x00\x00\x00\x2a\x05\x94\x06\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\xeb\x02\x01\x52\x54\x00\x8d\xb6\xa5", iov_len=32}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, 0) = 32
(debug) session::~session() this=c015ab90
(debug) iface::read() ifa=ens3, len=86
(debug) iface::read_solicit() saddr=fe80::ca13:3700:290:5760, daddr=ff02::1:ff00:8eb, taddr=2a05:9406::8eb, len=86
(debug) proxy::handle_reverse_advert()
(debug) proxy::handle_solicit()
(debug) checking 2a05:9406::8e8/125 against 2a05:9406::8eb
(debug) session::create() pr=c015a870, proxy=ens3, taddr=2a05:9406::8eb =c015ab90
(debug) session::handle_advert() taddr=2a05:9406::8eb, ttl=30000
(debug) session is active [taddr=2a05:9406::8eb]
(debug) iface::write_advert() daddr=fe80::ca13:3700:290:5760, taddr=2a05:9406::8eb
(debug) iface::write() ifa=ens3, daddr=fe80::ca13:3700:290:5760, len=32
sendmsg(3, {msg_name={sa_family=AF_INET6, sin6_port=htons(58), sin6_flowinfo=htonl(0), inet_pton(AF_INET6, "\x66\x65\x38\x30\x3a\x3a\x63\x61\x31\x33\x3a\x33\x37\x30\x30\x3a\x32\x39\x30\x3a\x35\x37\x36\x30", &sin6_addr), sin6_scope_id=0}, msg_namelen=28, msg_iov=[{iov_base="\x88\x00\x00\x00\xc0\x00\x00\x00\x2a\x05\x94\x06\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08\xeb\x02\x01\x52\x54\x00\x8d\xb6\xa5", iov_len=32}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, 0) = 32
(debug) session::~session() this=c015ab90
(debug) iface::read() ifa=ens3, len=86
(debug) iface::read_solicit() saddr=2a05:9406::62, daddr=ff02::1:ff00:1, taddr=2a05:9406::1, len=86
(debug) proxy::handle_reverse_advert()
(debug) proxy::handle_solicit()
(debug) checking 2a05:9406::8e8/125 against 2a05:9406::1

But absolutely nothing happens! No advertisement packet anywhere.
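
For anyone reproducing this, one way to watch specifically for the advertisements is to filter on the ICMPv6 type byte; a sketch that assumes no IPv6 extension headers (so the type sits at fixed offset 40):

# Type 136 = neighbor advertisement, 135 = neighbor solicitation
tcpdump -i ens3 -n 'icmp6 and ip6[40] == 136'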

@shamefulCake1

Wow, ValdikSS himself in the comments.

Try the release branch; master seems to be broken.

And you probably need proxying on both of your interfaces.
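
A sketch of what proxying on both interfaces could look like in /etc/ndppd.conf, reusing the prefix from the config above plus a hypothetical internal interface vpn0; whether this helps depends on where the proxied hosts actually live:

proxy ens3 {
    rule 2a05:9406::8e8/125 {
        static
    }
}

proxy vpn0 {
    rule 2a05:9406::8e8/125 {
        static
    }
}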

ValdikSS commented Oct 4, 2024

Uhh, all right, in my case there are several issues combined:

  1. The hosting provider infrastructure expects ND only from a global address; link-local sources for ND are filtered. However, NDs from the hosting provider infrastructure are themselves sent from link-local source addresses.
  2. ndppd always sends ND from a link-local address, so it can never send an NA in a way the hosting provider infrastructure would accept.
  3. ndppd sends ND through an AF_INET6 socket without filling in the Ethernet header, which means no destination MAC address is "known" beforehand.

Now the resulting failures:

  1. (the obvious one) ndppd sends NA replies from a link-local address, so it never works with my hosting provider's infrastructure.
  2. If the hosting provider's gateway MAC address has gone stale and must be re-learned (after a period with no IPv6 traffic), and the packet is forwarded rather than originated by the host itself, the Linux kernel sends the discovery ND from the host's link-local address (instead of the global address it would use for traffic originated directly on the host).
    In this case, no ND for the proxied address ever arrives from the hosting provider, since the proxied client's packet never got out to begin with: the gateway is not yet discovered.
  3. If the ND sender's MAC address is not in the neighbor cache (and the sender is not necessarily the gateway), it has to be discovered by IP address first. That means when ndppd sends its NA reply through the AF_INET6 socket without an Ethernet header, the kernel first tries to discover the destination MAC, using a link-local source address, and fails! The NA packet is never sent (despite sendmsg returning the proper size), which is why I did not see any outgoing packets in tcpdump.

@yoursunny's ndpresponder solves the gateway issue by defining the gateway MAC as NUD=NOARP on start, so it never expires and never needs to be rediscovered.

For issue №3, ndppd with patch #86 uses PF_PACKET with a filled-in Ethernet header, which fixes the problem; ndpresponder does the same.

To sum up, to fix this issue, ndppd should:

  1. Prefer a global address over the link-local address when sending NAs
  2. Fill in the Ethernet header with the destination MAC when sending NAs (send NA using raw ETH_P_IPV6 packet API #86)
  3. Set the gateway neighbor entry as static, using either NOARP or PERMANENT (see the sketch below)
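
As a manual stopgap for point 3, the gateway's neighbor entry can be pinned with iproute2; a sketch, assuming a hypothetical gateway link-local address and MAC:

# Pin the upstream gateway's MAC so the entry never goes stale and never
# needs to be rediscovered (the same trick ndpresponder uses)
ip -6 neigh replace fe80::1 lladdr 00:11:22:33:44:55 dev ens3 nud permanent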

@shamefulCake1

  1. The hosting provider infrastructure expects ND only from a global address; link-local sources for ND are filtered. However, NDs from the hosting provider infrastructure are themselves sent from link-local source addresses.

I think this completely breaks IPv6 conventions over there. In IPv6, all same-segment communication is supposed to happen over link-local addresses, never routable ones, possibly as a protection against (unintentional) global routing of network maintenance traffic. Correct me if I am wrong.

ValdikSS commented Oct 5, 2024

This is an anti-spoofing measure implemented by default in some virtualization software.
My hosting provider uses vmmanager; @yoursunny has an article where he mentions Virtualizor: https://yoursunny.com/t/2021/ndpresponder/

@shamefulCake1

Well, this is not the first time the wonderful egalitarian ideas of IPv6 have been crushed by grim reality.

In many coffee shops near me, IPv6 is implemented by serving ULAs and NATing the /64 to a single public address.

In any case, I don't see how this "technique" is anti-spoofing in any sense: what would you be spoofing to your VM manager? The VM manager knows everything about you.

@SpareSimian

A new RFC to delegate prefixes to endpoints that host containers: https://www.rfc-editor.org/info/rfc9663
Reddit discussion where I spotted it: https://www.reddit.com/r/ipv6/comments/1fygyl2/new_rfc_for_dhcpv6pd_to_endpoints/

@project0

Thanks @ValdikSS for sharing ndpresponder. My VPS just stopped working a couple of days ago and I couldn't figure out what was suddenly wrong 😬. I guess my hosting provider changed something under the hood.
