
mDNS is using the wrong / only one interface with IPv6 on Linux #5783

Open
T-X opened this issue Dec 30, 2024 · 5 comments
Labels
need/author-input Needs input from the original author

Comments

@T-X

T-X commented Dec 30, 2024

Summary

Problem

We tried using Qaul.net, which uses rust-libp2p, via mDNS with IPv6 enabled through the following commit/branch: qaul/qaul.net@main...mdns_IPv6

We observed on a desktop Linux/Debian (Sid) system that rust-libp2p uses only one interface for mDNS over IPv6. More specifically, it uses the first matching interface found in the local routing table, here htc0:

$ ip -6 route show table local
...
multicast ff00::/8 dev htc0 proto kernel metric 256 pref medium
multicast ff00::/8 dev enx3c18a01499aa proto kernel metric 256 pref medium
multicast ff00::/8 dev wlp1s0 proto kernel metric 256 pref medium

So Linux installs one IPv6 multicast route per interface, all with the same prefix length, metric and priority. Linux then simply chooses the first one, here htc0, which is typically not the one we are interested in. The resulting group membership can then be observed here:

$ ip maddr show dev htc0
7:	htc0
...
	link  33:33:00:00:00:fb users 2
...
	inet6 ff02::fb
...
$ ip maddr show dev wlp1s0
[empty]

We could verify that the routing table influences the decision. For instance, if we install a more specific route as a workaround, as follows, then we do see mDNS over IPv6 on the desired wlp1s0 WLAN interface from rust-libp2p:

$ ip -6 route add ff02::fb/128 dev wlp1s0 table local

It seems that rust-libp2p does not enforce a specific interface in its socket.join_multicast_v6() call in protocols/mdns/src/behaviour/iface.rs. The interface index is set to 0 in that call, leaving the decision to the OS.

Note that IPV6_ADD_MEMBERSHIP / IPV6_JOIN_GROUP behaves a bit counter-intuitively compared to a socket bound to 0.0.0.0/::. Such a socket receives unicast traffic from all interfaces. For the multicast join, however, even with a zero/unspecified interface index, only one interface is chosen instead of all of them.

Solution/Suggestion

rust-libp2p should call/maintain join_multicast_v6 for each interface.

(Alternatively, it could first try an OS-provided API if one is available. avahi-daemon, for instance, already correctly installs an mDNS listener on all available interfaces and performs the IPV6_JOIN_GROUP on each of them, and rust-libp2p could probably register its service with avahi-daemon.)

Expected behavior

mDNS with IPv6 should work on all interfaces by default.

Actual behavior

Only the first interface matching in the IPv6 "local" routing table on Linux is used for mDNS v6 in rust-libp2p.

Relevant log output

No response

Possible Solution

No response

Version

latest version

Would you like to work on fixing this bug?

Yes

@elenaf9
Contributor

elenaf9 commented Jan 4, 2025

rust-libp2p should call/maintain join_multicast_v6 for each interface.

It already calls join_multicast_v6 each time a new address is discovered, but it sets the interface_index to 0 (i.e. "any interface") every time.
I am not super familiar with IPv6 multicast, and I guess it is also OS-dependent, but I assume the OS then still always selects the same interface? I wonder if we can get the interface index for a newly reported "up" interface somewhere. Do you know how avahi-daemon is doing it?

@jxs added the need/author-input label Jan 28, 2025

github-actions bot commented Feb 4, 2025

Oops, seems like we needed more information for this issue, please comment with more details or this issue will be closed in 7 days.

@T-X
Author

T-X commented Feb 4, 2025

@elenaf9 as far as I can tell, avahi-daemon watches for netlink events on Linux, in particular RTM_NEWLINK, RTM_DELLINK, RTM_NEWADDR and RTM_DELADDR: https://github.com/avahi/avahi/blob/master/avahi-core/iface-linux.c#L66. These then call avahi_{,hw_}interface_check_relevant() -> interface_mdns_mcast_join() -> avahi_mdns_mcast_join_ipv{4,6}(): https://github.com/avahi/avahi/blob/master/avahi-core/socket.c#L143.

The netlink callback will provide the interface index on these events, ifinfomsg->ifi_index (NLMSG_DATA(n)->ifi_index) for RTM_NEWLINK/RTM_DELLINK and ifaddrmsg->ifa_index (NLMSG_DATA(n)->ifa_index) for RTM_NEWADDR/RTM_DELADDR.

It also seems like avahi-daemon is doing its own bookkeeping of network interfaces and which addresses belong to each.

@T-X
Author

T-X commented Feb 4, 2025

And it seems like the if-watcher library, which libp2p seems to use, only returns an address right now, without an interface (index): https://docs.rs/if-watch/latest/if_watch/enum.IfEvent.html.

But it seems if-watcher also uses netlink / RTM_NEWADDR on Linux (sorry, I'm not good at reading Rust code yet :D) ? https://github.com/libp2p/if-watch/blob/master/src/linux.rs#L66. So it might be possible to also get and propagate the interface index from there?

@T-X
Author

T-X commented Feb 4, 2025

And one more note / potential pitfall: it seems avahi-daemon only joins once per interface, with one IPv4/IPv6 address; it checks i->mcast_joined before joining.

And then it prefers an IPv6 address of global scope: https://github.com/avahi/avahi/blob/master/avahi-core/iface.c#L192. This might be a rough, manual (incomplete?) attempt to follow RFC 6724.

@poettering had also replied to me about this ages ago (I think I was likewise confused why avahi-daemon would use a global-scope source IPv6 address instead of a link-local one, when ff02::fb for mDNS is clearly a multicast destination of link-local scope, the "02" in ff02::fb): "Avahi will always announce the 'best' address it can find on each interface. Meaning that global addresses are generally preferred over link-local ones." <- https://avahi.freedesktop.narkive.com/1pxEq5mx/general-usage-questions
