From: Guillaume Nault <gnault@redhat.com>
To: Stanislav Fomichev <stfomichev@gmail.com>
Cc: David Miller <davem@davemloft.net>,
Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
Eric Dumazet <edumazet@google.com>,
netdev@vger.kernel.org, Simon Horman <horms@kernel.org>,
David Ahern <dsahern@kernel.org>,
Antonio Quartulli <antonio@mandelbit.com>,
Ido Schimmel <idosch@idosch.org>, Petr Machata <petrm@nvidia.com>
Subject: Re: [PATCH net v4 1/2] gre: Fix IPv6 link-local address generation.
Date: Fri, 14 Mar 2025 20:22:27 +0100 [thread overview]
Message-ID: <Z9SB87QzBbod1t7R@debian> (raw)
In-Reply-To: <Z9RIyKZDNoka53EO@mini-arch>

On Fri, Mar 14, 2025 at 08:18:32AM -0700, Stanislav Fomichev wrote:
> On 03/07, Guillaume Nault wrote:
> > Use addrconf_addr_gen() to generate IPv6 link-local addresses on GRE
> > devices in most cases and fall back to using add_v4_addrs() only in
> > case the GRE configuration is incompatible with addrconf_addr_gen().
> >
> > GRE used to use addrconf_addr_gen() until commit e5dd729460ca
> > ("ip/ip6_gre: use the same logic as SIT interfaces when computing v6LL
> > address") restricted this use to gretap and ip6gretap devices, and
> > created add_v4_addrs() (borrowed from SIT) for non-Ethernet GRE ones.
> >
> > The original problem came when commit 9af28511be10 ("addrconf: refuse
> > isatap eui64 for INADDR_ANY") made __ipv6_isatap_ifid() fail when its
> > addr parameter was 0. The commit says that this would create an invalid
> > address; however, I couldn't find any RFC saying that the generated
> > interface identifier would be wrong. Anyway, since gre over IPv4
> > devices pass their local tunnel address to __ipv6_isatap_ifid(), that
> > commit broke their IPv6 link-local address generation when the local
> > address was unspecified.
> >
> > Then commit e5dd729460ca ("ip/ip6_gre: use the same logic as SIT
> > interfaces when computing v6LL address") tried to fix that case by
> > defining add_v4_addrs() and calling it to generate the IPv6 link-local
> > address instead of using addrconf_addr_gen() (except for gretap and
> > ip6gretap devices, which would still use the regular
> > addrconf_addr_gen(), since they have a MAC address).
> >
> > That broke several use cases because add_v4_addrs() isn't properly
> > integrated into the rest of IPv6 Neighbor Discovery code. Several of
> > these shortcomings have been fixed over time, but add_v4_addrs()
> > remains broken in several respects. In particular, it doesn't send any
> > Router Solicitations, so the SLAAC process doesn't start until the
> > interface receives a Router Advertisement. Also, add_v4_addrs() mostly
> > ignores the address generation mode of the interface
> > (/proc/sys/net/ipv6/conf/*/addr_gen_mode), thus breaking the
> > IN6_ADDR_GEN_MODE_RANDOM and IN6_ADDR_GEN_MODE_STABLE_PRIVACY cases.
> >
> > Fix the situation by using add_v4_addrs() only in the specific scenario
> > where the normal method would fail. That is, for interfaces that have
> > all of the following characteristics:
> >
> > * run over IPv4,
> > * transport IP packets directly, not Ethernet frames (that is, not
> >   gretap interfaces),
> > * tunnel endpoint is INADDR_ANY (that is, 0),
> > * device address generation mode is EUI64.
>
> Could you please double check net/forwarding/ip6gre_custom_multipath_hash.sh ?
> It seems like it started failing after this series was pulled:
> https://netdev-3.bots.linux.dev/vmksft-forwarding-dbg/results/31301/2-ip6gre-custom-multipath-hash-sh/stdout

Hum, net/forwarding/ip6gre_custom_multipath_hash.sh works for me on the
current net tree (I'm at commit 4003c9e78778). I have only one failure,
but it already happened before 183185a18ff9 ("gre: Fix IPv6 link-local
address generation.") was applied.
# ./ip6gre_custom_multipath_hash.sh
TEST: ping [ OK ]
TEST: ping6 [ OK ]
INFO: Running IPv4 overlay custom multipath hash tests
TEST: Multipath hash field: Inner source IP (balanced) [ OK ]
INFO: Packets sent on path1 / path2: 6350 / 6251
TEST: Multipath hash field: Inner source IP (unbalanced) [ OK ]
INFO: Packets sent on path1 / path2: 12602 / 0
TEST: Multipath hash field: Inner destination IP (balanced) [ OK ]
INFO: Packets sent on path1 / path2: 5400 / 7201
TEST: Multipath hash field: Inner destination IP (unbalanced) [ OK ]
INFO: Packets sent on path1 / path2: 0 / 12600
TEST: Multipath hash field: Inner source port (balanced) [ OK ]
INFO: Packets sent on path1 / path2: 16458 / 16311
TEST: Multipath hash field: Inner source port (unbalanced) [ OK ]
INFO: Packets sent on path1 / path2: 32769 / 0
TEST: Multipath hash field: Inner destination port (balanced) [ OK ]
INFO: Packets sent on path1 / path2: 16458 / 16311
TEST: Multipath hash field: Inner destination port (unbalanced) [ OK ]
INFO: Packets sent on path1 / path2: 0 / 32769
INFO: Running IPv6 overlay custom multipath hash tests
TEST: Multipath hash field: Inner source IP (balanced) [ OK ]
INFO: Packets sent on path1 / path2: 5900 / 6700
TEST: Multipath hash field: Inner source IP (unbalanced) [ OK ]
INFO: Packets sent on path1 / path2: 0 / 12600
TEST: Multipath hash field: Inner destination IP (balanced) [ OK ]
INFO: Packets sent on path1 / path2: 5900 / 6700
TEST: Multipath hash field: Inner destination IP (unbalanced) [ OK ]
INFO: Packets sent on path1 / path2: 12600 / 0
TEST: Multipath hash field: Inner flowlabel (balanced) [FAIL]
Expected traffic to be balanced, but it is not
INFO: Packets sent on path1 / path2: 0 / 1
TEST: Multipath hash field: Inner flowlabel (unbalanced) [ OK ]
INFO: Packets sent on path1 / path2: 0 / 12600
TEST: Multipath hash field: Inner source port (balanced) [ OK ]
INFO: Packets sent on path1 / path2: 16387 / 16385
TEST: Multipath hash field: Inner source port (unbalanced) [ OK ]
INFO: Packets sent on path1 / path2: 32770 / 0
TEST: Multipath hash field: Inner destination port (balanced) [ OK ]
INFO: Packets sent on path1 / path2: 16386 / 16384
TEST: Multipath hash field: Inner destination port (unbalanced) [ OK ]
INFO: Packets sent on path1 / path2: 32769 / 0