* [PATCH 1/2 net v2] ipv4: nexthop: avoid duplicate NHA_HW_STATS_ENABLE on nexthop group dump
@ 2026-04-01 8:17 Fernando Fernandez Mancera
2026-04-01 8:17 ` [PATCH 2/2 net v2] ipv4: nexthop: allocate skb dynamically in rtm_get_nexthop() Fernando Fernandez Mancera
` (2 more replies)
0 siblings, 3 replies; 9+ messages in thread
From: Fernando Fernandez Mancera @ 2026-04-01 8:17 UTC (permalink / raw)
To: netdev
Cc: idosch, petrm, horms, pabeni, kuba, edumazet, davem, dsahern,
kees, Fernando Fernandez Mancera
Currently NHA_HW_STATS_ENABLE is included twice every time a dump of a
nexthop group is performed with NHA_OP_FLAG_DUMP_STATS. As all the stats
querying was moved to nla_put_nh_group_stats(), leave only that
instance of the attribute.
Fixes: 5072ae00aea4 ("net: nexthop: Expose nexthop group HW stats to user space")
Signed-off-by: Fernando Fernandez Mancera <fmancera@suse.de>
---
v2: patch added on this revision
---
net/ipv4/nexthop.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/net/ipv4/nexthop.c b/net/ipv4/nexthop.c
index c942f1282236..a0c694583299 100644
--- a/net/ipv4/nexthop.c
+++ b/net/ipv4/nexthop.c
@@ -902,8 +902,7 @@ static int nla_put_nh_group(struct sk_buff *skb, struct nexthop *nh,
 		goto nla_put_failure;
 
 	if (op_flags & NHA_OP_FLAG_DUMP_STATS &&
-	    (nla_put_u32(skb, NHA_HW_STATS_ENABLE, nhg->hw_stats) ||
-	     nla_put_nh_group_stats(skb, nh, op_flags)))
+	    nla_put_nh_group_stats(skb, nh, op_flags))
 		goto nla_put_failure;
 
 	return 0;
--
2.53.0
^ permalink raw reply related [flat|nested] 9+ messages in thread

* [PATCH 2/2 net v2] ipv4: nexthop: allocate skb dynamically in rtm_get_nexthop()
2026-04-01 8:17 [PATCH 1/2 net v2] ipv4: nexthop: avoid duplicate NHA_HW_STATS_ENABLE on nexthop group dump Fernando Fernandez Mancera
@ 2026-04-01 8:17 ` Fernando Fernandez Mancera
2026-04-01 8:59 ` Eric Dumazet
` (2 more replies)
2026-04-01 8:53 ` [PATCH 1/2 net v2] ipv4: nexthop: avoid duplicate NHA_HW_STATS_ENABLE on nexthop group dump Eric Dumazet
2026-04-01 12:50 ` Ido Schimmel
2 siblings, 3 replies; 9+ messages in thread
From: Fernando Fernandez Mancera @ 2026-04-01 8:17 UTC (permalink / raw)
To: netdev
Cc: idosch, petrm, horms, pabeni, kuba, edumazet, davem, dsahern,
kees, Fernando Fernandez Mancera, Yiming Qian

When querying a nexthop object via RTM_GETNEXTHOP, the kernel currently
allocates a fixed-size skb using NLMSG_GOODSIZE. While sufficient for
single nexthops and small Equal-Cost Multi-Path groups, this fixed
allocation fails for large nexthop groups, such as groups with 512
nexthops.

This results in the following warning splat:

WARNING: net/ipv4/nexthop.c:3395 at rtm_get_nexthop+0x176/0x1c0, CPU#20: rep/4608
[...]
RIP: 0010:rtm_get_nexthop (net/ipv4/nexthop.c:3395)
[...]
Call Trace:
<TASK>
rtnetlink_rcv_msg (net/core/rtnetlink.c:6989)
netlink_rcv_skb (net/netlink/af_netlink.c:2550)
netlink_unicast (net/netlink/af_netlink.c:1319 net/netlink/af_netlink.c:1344)
netlink_sendmsg (net/netlink/af_netlink.c:1894)
____sys_sendmsg (net/socket.c:721 net/socket.c:736 net/socket.c:2585)
___sys_sendmsg (net/socket.c:2641)
__sys_sendmsg (net/socket.c:2671)
do_syscall_64 (arch/x86/entry/syscall_64.c:63 arch/x86/entry/syscall_64.c:94)
entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:130)
</TASK>

Fix this by sizing the allocation dynamically with nh_nlmsg_size() and
using nlmsg_new(), which is consistent with the nexthop_notify()
behavior. In addition, adjust nh_nlmsg_size_grp() so it calculates the
size needed based on the flags passed.

This cannot be reproduced via iproute2 as the group size is currently
limited and the command fails as follows:

addattr_l ERROR: message exceeded bound of 1048

Fixes: 430a049190de ("nexthop: Add support for nexthop groups")
Reported-by: Yiming Qian <yimingqian591@gmail.com>
Closes: https://lore.kernel.org/netdev/CAL_bE8Li2h4KO+AQFXW4S6Yb_u5X4oSKnkywW+LPFjuErhqELA@mail.gmail.com/
Signed-off-by: Fernando Fernandez Mancera <fmancera@suse.de>
---
v2: adjust nh_nlmsg_size_grp() to handle size for stats and add symbols
to the trace in commit message
---
net/ipv4/nexthop.c | 35 +++++++++++++++++++++++++----------
1 file changed, 25 insertions(+), 10 deletions(-)

diff --git a/net/ipv4/nexthop.c b/net/ipv4/nexthop.c
index a0c694583299..9abbf3989f23 100644
--- a/net/ipv4/nexthop.c
+++ b/net/ipv4/nexthop.c
@@ -1003,7 +1003,7 @@ static size_t nh_nlmsg_size_grp_res(struct nh_group *nhg)
 	       nla_total_size_64bit(8);/* NHA_RES_GROUP_UNBALANCED_TIME */
 }
 
-static size_t nh_nlmsg_size_grp(struct nexthop *nh)
+static size_t nh_nlmsg_size_grp(struct nexthop *nh, u32 op_flags)
 {
 	struct nh_group *nhg = rtnl_dereference(nh->nh_grp);
 	size_t sz = sizeof(struct nexthop_grp) * nhg->num_nh;
@@ -1013,6 +1013,21 @@ static size_t nh_nlmsg_size_grp(struct nexthop *nh)
 	if (nhg->resilient)
 		tot += nh_nlmsg_size_grp_res(nhg);
 
+	if (op_flags & NHA_OP_FLAG_DUMP_STATS) {
+		tot += nla_total_size(0) +	/* NHA_GROUP_STATS */
+		       nla_total_size(4);	/* NHA_HW_STATS_ENABLE */
+		tot += nhg->num_nh *
+		       (nla_total_size(0) +	/* NHA_GROUP_STATS_ENTRY */
+			nla_total_size(4) +	/* NHA_GROUP_STATS_ENTRY_ID */
+			nla_total_size_64bit(8)); /* NHA_GROUP_STATS_ENTRY_PACKETS */
+
+		if (op_flags & NHA_OP_FLAG_DUMP_HW_STATS) {
+			tot += nhg->num_nh *
+			       nla_total_size_64bit(8); /* NHA_GROUP_STATS_ENTRY_PACKETS_HW */
+			tot += nla_total_size(4); /* NHA_HW_STATS_USED */
+		}
+	}
+
 	return tot;
 }
 
@@ -1047,14 +1062,14 @@ static size_t nh_nlmsg_size_single(struct nexthop *nh)
 	return sz;
 }
 
-static size_t nh_nlmsg_size(struct nexthop *nh)
+static size_t nh_nlmsg_size(struct nexthop *nh, u32 op_flags)
 {
 	size_t sz = NLMSG_ALIGN(sizeof(struct nhmsg));
 
 	sz += nla_total_size(4); /* NHA_ID */
 
 	if (nh->is_group)
-		sz += nh_nlmsg_size_grp(nh) +
+		sz += nh_nlmsg_size_grp(nh, op_flags) +
 		      nla_total_size(4) +	/* NHA_OP_FLAGS */
 		      0;
 	else
@@ -1070,7 +1085,7 @@ static void nexthop_notify(int event, struct nexthop *nh, struct nl_info *info)
 	struct sk_buff *skb;
 	int err = -ENOBUFS;
 
-	skb = nlmsg_new(nh_nlmsg_size(nh), gfp_any());
+	skb = nlmsg_new(nh_nlmsg_size(nh, 0), gfp_any());
 	if (!skb)
 		goto errout;
 
@@ -3376,15 +3391,15 @@ static int rtm_get_nexthop(struct sk_buff *in_skb, struct nlmsghdr *nlh,
 	if (err)
 		return err;
 
-	err = -ENOBUFS;
-	skb = alloc_skb(NLMSG_GOODSIZE, GFP_KERNEL);
-	if (!skb)
-		goto out;
-
 	err = -ENOENT;
 	nh = nexthop_find_by_id(net, id);
 	if (!nh)
-		goto errout_free;
+		goto out;
+
+	err = -ENOBUFS;
+	skb = nlmsg_new(nh_nlmsg_size(nh, op_flags), GFP_KERNEL);
+	if (!skb)
+		goto out;
 
 	err = nh_fill_node(skb, nh, RTM_NEWNEXTHOP, NETLINK_CB(in_skb).portid,
 			   nlh->nlmsg_seq, 0, op_flags);
--
2.53.0

^ permalink raw reply related [flat|nested] 9+ messages in thread
* Re: [PATCH 2/2 net v2] ipv4: nexthop: allocate skb dynamically in rtm_get_nexthop()
2026-04-01 8:17 ` [PATCH 2/2 net v2] ipv4: nexthop: allocate skb dynamically in rtm_get_nexthop() Fernando Fernandez Mancera
@ 2026-04-01 8:59 ` Eric Dumazet
2026-04-01 13:02 ` Ido Schimmel
2026-04-01 14:50 ` Fernando Fernandez Mancera
2 siblings, 0 replies; 9+ messages in thread
From: Eric Dumazet @ 2026-04-01 8:59 UTC (permalink / raw)
To: Fernando Fernandez Mancera
Cc: netdev, idosch, petrm, horms, pabeni, kuba, davem, dsahern, kees,
Yiming Qian

On Wed, Apr 1, 2026 at 1:17 AM Fernando Fernandez Mancera
<fmancera@suse.de> wrote:
>
> When querying a nexthop object via RTM_GETNEXTHOP, the kernel currently
> allocates a fixed-size skb using NLMSG_GOODSIZE. While sufficient for
> single nexthops and small Equal-Cost Multi-Path groups, this fixed
> allocation fails for large nexthop groups like 512 nexthops.
>
> This results in the following warning splat:
>
> WARNING: net/ipv4/nexthop.c:3395 at rtm_get_nexthop+0x176/0x1c0, CPU#20: rep/4608
> [...]
> RIP: 0010:rtm_get_nexthop (net/ipv4/nexthop.c:3395)
> [...]
> Call Trace:
> <TASK>
> rtnetlink_rcv_msg (net/core/rtnetlink.c:6989)
> netlink_rcv_skb (net/netlink/af_netlink.c:2550)
> netlink_unicast (net/netlink/af_netlink.c:1319 net/netlink/af_netlink.c:1344)
> netlink_sendmsg (net/netlink/af_netlink.c:1894)
> ____sys_sendmsg (net/socket.c:721 net/socket.c:736 net/socket.c:2585)
> ___sys_sendmsg (net/socket.c:2641)
> __sys_sendmsg (net/socket.c:2671)
> do_syscall_64 (arch/x86/entry/syscall_64.c:63 arch/x86/entry/syscall_64.c:94)
> entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:130)
> </TASK>
>
> Fix this by allocating the size dynamically using nh_nlmsg_size() and
> using nlmsg_new(), this is consistent with nexthop_notify() behavior. In
> addition, adjust nh_nlmsg_size_grp() so it calculates the size needed
> based on flags passed.
>
> This cannot be reproduced via iproute2 as the group size is currently
> limited and the command fails as follows:
>
> addattr_l ERROR: message exceeded bound of 1048
>
> Fixes: 430a049190de ("nexthop: Add support for nexthop groups")
> Reported-by: Yiming Qian <yimingqian591@gmail.com>
> Closes: https://lore.kernel.org/netdev/CAL_bE8Li2h4KO+AQFXW4S6Yb_u5X4oSKnkywW+LPFjuErhqELA@mail.gmail.com/
> Signed-off-by: Fernando Fernandez Mancera <fmancera@suse.de>

Reviewed-by: Eric Dumazet <edumazet@google.com>

Thanks !

^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH 2/2 net v2] ipv4: nexthop: allocate skb dynamically in rtm_get_nexthop()
2026-04-01 8:17 ` [PATCH 2/2 net v2] ipv4: nexthop: allocate skb dynamically in rtm_get_nexthop() Fernando Fernandez Mancera
2026-04-01 8:59 ` Eric Dumazet
@ 2026-04-01 13:02 ` Ido Schimmel
2026-04-01 13:24 ` Fernando Fernandez Mancera
2026-04-01 14:50 ` Fernando Fernandez Mancera
2 siblings, 1 reply; 9+ messages in thread
From: Ido Schimmel @ 2026-04-01 13:02 UTC (permalink / raw)
To: Fernando Fernandez Mancera
Cc: netdev, petrm, horms, pabeni, kuba, edumazet, davem, dsahern,
kees, Yiming Qian

On Wed, Apr 01, 2026 at 10:17:41AM +0200, Fernando Fernandez Mancera wrote:
> -static size_t nh_nlmsg_size_grp(struct nexthop *nh)
> +static size_t nh_nlmsg_size_grp(struct nexthop *nh, u32 op_flags)
> {
> 	struct nh_group *nhg = rtnl_dereference(nh->nh_grp);
> 	size_t sz = sizeof(struct nexthop_grp) * nhg->num_nh;
> @@ -1013,6 +1013,21 @@ static size_t nh_nlmsg_size_grp(struct nexthop *nh)
> 	if (nhg->resilient)
> 		tot += nh_nlmsg_size_grp_res(nhg);
>
> +	if (op_flags & NHA_OP_FLAG_DUMP_STATS) {
> +		tot += nla_total_size(0) +	/* NHA_GROUP_STATS */
> +		       nla_total_size(4);	/* NHA_HW_STATS_ENABLE */
> +		tot += nhg->num_nh *
> +		       (nla_total_size(0) +	/* NHA_GROUP_STATS_ENTRY */
> +			nla_total_size(4) +	/* NHA_GROUP_STATS_ENTRY_ID */
> +			nla_total_size_64bit(8)); /* NHA_GROUP_STATS_ENTRY_PACKETS */
> +
> +		if (op_flags & NHA_OP_FLAG_DUMP_HW_STATS) {
> +			tot += nhg->num_nh *
> +			       nla_total_size_64bit(8); /* NHA_GROUP_STATS_ENTRY_PACKETS_HW */
> +			tot += nla_total_size(4); /* NHA_HW_STATS_USED */
> +		}
> +	}
> +

This looks correct

> 	return tot;
> }
>
> @@ -1047,14 +1062,14 @@ static size_t nh_nlmsg_size_single(struct nexthop *nh)
> 	return sz;
> }
>
> -static size_t nh_nlmsg_size(struct nexthop *nh)
> +static size_t nh_nlmsg_size(struct nexthop *nh, u32 op_flags)
> {
> 	size_t sz = NLMSG_ALIGN(sizeof(struct nhmsg));
>
> 	sz += nla_total_size(4); /* NHA_ID */
>
> 	if (nh->is_group)
> -		sz += nh_nlmsg_size_grp(nh) +
> +		sz += nh_nlmsg_size_grp(nh, op_flags) +
> 		      nla_total_size(4) +	/* NHA_OP_FLAGS */
> 		      0;

But the AI review [1] also mentions missing accounting for NHA_FDB which
seems like a legit issue (even if we can't currently trigger it).

In the single nexthop case we have:

sz = nla_total_size(4); /* NHA_OIF */

Which covers NHA_FDB since it's mutually exclusive with NHA_OIF.

[1] https://sashiko.dev/#/patchset/20260401081741.4273-1-fmancera%40suse.de

^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH 2/2 net v2] ipv4: nexthop: allocate skb dynamically in rtm_get_nexthop()
2026-04-01 13:02 ` Ido Schimmel
@ 2026-04-01 13:24 ` Fernando Fernandez Mancera
0 siblings, 0 replies; 9+ messages in thread
From: Fernando Fernandez Mancera @ 2026-04-01 13:24 UTC (permalink / raw)
To: Ido Schimmel
Cc: netdev, petrm, horms, pabeni, kuba, edumazet, davem, dsahern,
kees, Yiming Qian

On 4/1/26 3:02 PM, Ido Schimmel wrote:
> On Wed, Apr 01, 2026 at 10:17:41AM +0200, Fernando Fernandez Mancera wrote:
>> -static size_t nh_nlmsg_size_grp(struct nexthop *nh)
>> +static size_t nh_nlmsg_size_grp(struct nexthop *nh, u32 op_flags)
>> {
>> 	struct nh_group *nhg = rtnl_dereference(nh->nh_grp);
>> 	size_t sz = sizeof(struct nexthop_grp) * nhg->num_nh;
>> @@ -1013,6 +1013,21 @@ static size_t nh_nlmsg_size_grp(struct nexthop *nh)
>> 	if (nhg->resilient)
>> 		tot += nh_nlmsg_size_grp_res(nhg);
>>
>> +	if (op_flags & NHA_OP_FLAG_DUMP_STATS) {
>> +		tot += nla_total_size(0) +	/* NHA_GROUP_STATS */
>> +		       nla_total_size(4);	/* NHA_HW_STATS_ENABLE */
>> +		tot += nhg->num_nh *
>> +		       (nla_total_size(0) +	/* NHA_GROUP_STATS_ENTRY */
>> +			nla_total_size(4) +	/* NHA_GROUP_STATS_ENTRY_ID */
>> +			nla_total_size_64bit(8)); /* NHA_GROUP_STATS_ENTRY_PACKETS */
>> +
>> +		if (op_flags & NHA_OP_FLAG_DUMP_HW_STATS) {
>> +			tot += nhg->num_nh *
>> +			       nla_total_size_64bit(8); /* NHA_GROUP_STATS_ENTRY_PACKETS_HW */
>> +			tot += nla_total_size(4); /* NHA_HW_STATS_USED */
>> +		}
>> +	}
>> +
>
> This looks correct
>
>> 	return tot;
>> }
>>
>> @@ -1047,14 +1062,14 @@ static size_t nh_nlmsg_size_single(struct nexthop *nh)
>> 	return sz;
>> }
>>
>> -static size_t nh_nlmsg_size(struct nexthop *nh)
>> +static size_t nh_nlmsg_size(struct nexthop *nh, u32 op_flags)
>> {
>> 	size_t sz = NLMSG_ALIGN(sizeof(struct nhmsg));
>>
>> 	sz += nla_total_size(4); /* NHA_ID */
>>
>> 	if (nh->is_group)
>> -		sz += nh_nlmsg_size_grp(nh) +
>> +		sz += nh_nlmsg_size_grp(nh, op_flags) +
>> 		      nla_total_size(4) +	/* NHA_OP_FLAGS */
>> 		      0;
>
> But the AI review [1] also mentions missing accounting for NHA_FDB which
> seems like a legit issue (even if we can't currently trigger it).
>
> In the single nexthop case we have:
>
> sz = nla_total_size(4); /* NHA_OIF */
>
> Which covers NHA_FDB since it's mutually exclusive with NHA_OIF.

Yes, seems legit. Probably the issue cannot be reproduced because some
padding might be absorbing the impact. Thank you Ido for pointing it
out.

I guess it makes sense to fix this in a v3 instead of a separate patch,
even though this has been affecting the nexthop_notify() path since
group support was implemented.

Thanks,
Fernando.

> [1] https://sashiko.dev/#/patchset/20260401081741.4273-1-fmancera%40suse.de

^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH 2/2 net v2] ipv4: nexthop: allocate skb dynamically in rtm_get_nexthop()
2026-04-01 8:17 ` [PATCH 2/2 net v2] ipv4: nexthop: allocate skb dynamically in rtm_get_nexthop() Fernando Fernandez Mancera
2026-04-01 8:59 ` Eric Dumazet
2026-04-01 13:02 ` Ido Schimmel
@ 2026-04-01 14:50 ` Fernando Fernandez Mancera
2026-04-02 0:28 ` Jakub Kicinski
2 siblings, 1 reply; 9+ messages in thread
From: Fernando Fernandez Mancera @ 2026-04-01 14:50 UTC (permalink / raw)
To: netdev
Cc: idosch, petrm, horms, pabeni, kuba, edumazet, davem, dsahern,
kees, Yiming Qian

On 4/1/26 10:17 AM, Fernando Fernandez Mancera wrote:
> When querying a nexthop object via RTM_GETNEXTHOP, the kernel currently
> allocates a fixed-size skb using NLMSG_GOODSIZE. While sufficient for
> single nexthops and small Equal-Cost Multi-Path groups, this fixed
> allocation fails for large nexthop groups like 512 nexthops.
>
> This results in the following warning splat:
>
> WARNING: net/ipv4/nexthop.c:3395 at rtm_get_nexthop+0x176/0x1c0, CPU#20: rep/4608
> [...]
> RIP: 0010:rtm_get_nexthop (net/ipv4/nexthop.c:3395)
> [...]
> Call Trace:
> <TASK>
> rtnetlink_rcv_msg (net/core/rtnetlink.c:6989)
> netlink_rcv_skb (net/netlink/af_netlink.c:2550)
> netlink_unicast (net/netlink/af_netlink.c:1319 net/netlink/af_netlink.c:1344)
> netlink_sendmsg (net/netlink/af_netlink.c:1894)
> ____sys_sendmsg (net/socket.c:721 net/socket.c:736 net/socket.c:2585)
> ___sys_sendmsg (net/socket.c:2641)
> __sys_sendmsg (net/socket.c:2671)
> do_syscall_64 (arch/x86/entry/syscall_64.c:63 arch/x86/entry/syscall_64.c:94)
> entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:130)
> </TASK>
>
> Fix this by allocating the size dynamically using nh_nlmsg_size() and
> using nlmsg_new(), this is consistent with nexthop_notify() behavior. In
> addition, adjust nh_nlmsg_size_grp() so it calculates the size needed
> based on flags passed.
>
> This cannot be reproduced via iproute2 as the group size is currently
> limited and the command fails as follows:
>
> addattr_l ERROR: message exceeded bound of 1048
>
> Fixes: 430a049190de ("nexthop: Add support for nexthop groups")
> Reported-by: Yiming Qian <yimingqian591@gmail.com>
> Closes: https://lore.kernel.org/netdev/CAL_bE8Li2h4KO+AQFXW4S6Yb_u5X4oSKnkywW+LPFjuErhqELA@mail.gmail.com/
> Signed-off-by: Fernando Fernandez Mancera <fmancera@suse.de>

As Ido requested some changes, update the status in patchwork.

--
pw-bot: cr

^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH 2/2 net v2] ipv4: nexthop: allocate skb dynamically in rtm_get_nexthop()
2026-04-01 14:50 ` Fernando Fernandez Mancera
@ 2026-04-02 0:28 ` Jakub Kicinski
0 siblings, 0 replies; 9+ messages in thread
From: Jakub Kicinski @ 2026-04-02 0:28 UTC (permalink / raw)
To: Fernando Fernandez Mancera
Cc: netdev, idosch, petrm, horms, pabeni, edumazet, davem, dsahern,
kees, Yiming Qian

On Wed, 1 Apr 2026 16:50:36 +0200 Fernando Fernandez Mancera wrote:
> As Ido requested some changes, update the status in patchwork.

Thanks but FWIW the idea is to toss it into your normal replies :)
The bot should be able to dig thru the thread to find the patchwork
series even if you reply deep in the conversation.

Quoting documentation:

Updating patch status
~~~~~~~~~~~~~~~~~~~~~

Contributors and reviewers do not have the permissions to update patch
state directly in patchwork. Patchwork doesn't expose much information
about the history of the state of patches, therefore having multiple
people update the state leads to confusion.

Instead of delegating patchwork permissions netdev uses a simple mail
bot which looks for special commands/lines within the emails sent to
the mailing list. For example to mark a series as Changes Requested one
needs to send the following line anywhere in the email thread::

  pw-bot: changes-requested

As a result the bot will set the entire series to Changes Requested.
This may be useful when author discovers a bug in their own series and
wants to prevent it from getting applied.

The use of the bot is entirely optional, if in doubt ignore its
existence completely.

Maintainers will classify and update the state of the patches
themselves. No email should ever be sent to the list with the main
purpose of communicating with the bot, the bot commands should be seen
as metadata.

The use of the bot is restricted to authors of the patches (the
``From:`` header on patch submission and command must match!),
maintainers of the modified code according to the MAINTAINERS file
(again, ``From:`` must match the MAINTAINERS entry) and a handful of
senior reviewers.

Bot records its activity here:

  https://netdev.bots.linux.dev/pw-bot.html

See: https://www.kernel.org/doc/html/next/process/maintainer-netdev.html#updating-patch-status

^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH 1/2 net v2] ipv4: nexthop: avoid duplicate NHA_HW_STATS_ENABLE on nexthop group dump
2026-04-01 8:17 [PATCH 1/2 net v2] ipv4: nexthop: avoid duplicate NHA_HW_STATS_ENABLE on nexthop group dump Fernando Fernandez Mancera
2026-04-01 8:17 ` [PATCH 2/2 net v2] ipv4: nexthop: allocate skb dynamically in rtm_get_nexthop() Fernando Fernandez Mancera
@ 2026-04-01 8:53 ` Eric Dumazet
2026-04-01 12:50 ` Ido Schimmel
2 siblings, 0 replies; 9+ messages in thread
From: Eric Dumazet @ 2026-04-01 8:53 UTC (permalink / raw)
To: Fernando Fernandez Mancera
Cc: netdev, idosch, petrm, horms, pabeni, kuba, davem, dsahern, kees

On Wed, Apr 1, 2026 at 1:17 AM Fernando Fernandez Mancera
<fmancera@suse.de> wrote:
>
> Currently NHA_HW_STATS_ENABLE is included twice everytime a dump of
> nexthop group is performed with NHA_OP_FLAG_DUMP_STATS. As all the stats
> querying were moved to nla_put_nh_group_stats(), leave only that
> instance of the attribute querying.
>
> Fixes: 5072ae00aea4 ("net: nexthop: Expose nexthop group HW stats to user space")
> Signed-off-by: Fernando Fernandez Mancera <fmancera@suse.de>

Reviewed-by: Eric Dumazet <edumazet@google.com>

^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH 1/2 net v2] ipv4: nexthop: avoid duplicate NHA_HW_STATS_ENABLE on nexthop group dump
2026-04-01 8:17 [PATCH 1/2 net v2] ipv4: nexthop: avoid duplicate NHA_HW_STATS_ENABLE on nexthop group dump Fernando Fernandez Mancera
2026-04-01 8:17 ` [PATCH 2/2 net v2] ipv4: nexthop: allocate skb dynamically in rtm_get_nexthop() Fernando Fernandez Mancera
2026-04-01 8:53 ` [PATCH 1/2 net v2] ipv4: nexthop: avoid duplicate NHA_HW_STATS_ENABLE on nexthop group dump Eric Dumazet
@ 2026-04-01 12:50 ` Ido Schimmel
2 siblings, 0 replies; 9+ messages in thread
From: Ido Schimmel @ 2026-04-01 12:50 UTC (permalink / raw)
To: Fernando Fernandez Mancera
Cc: netdev, petrm, horms, pabeni, kuba, edumazet, davem, dsahern, kees

On Wed, Apr 01, 2026 at 10:17:40AM +0200, Fernando Fernandez Mancera wrote:
> Currently NHA_HW_STATS_ENABLE is included twice everytime a dump of
> nexthop group is performed with NHA_OP_FLAG_DUMP_STATS. As all the stats
> querying were moved to nla_put_nh_group_stats(), leave only that
> instance of the attribute querying.
>
> Fixes: 5072ae00aea4 ("net: nexthop: Expose nexthop group HW stats to user space")
> Signed-off-by: Fernando Fernandez Mancera <fmancera@suse.de>

Reviewed-by: Ido Schimmel <idosch@nvidia.com>

^ permalink raw reply [flat|nested] 9+ messages in thread
end of thread, other threads:[~2026-04-02 0:28 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2026-04-01 8:17 [PATCH 1/2 net v2] ipv4: nexthop: avoid duplicate NHA_HW_STATS_ENABLE on nexthop group dump Fernando Fernandez Mancera
2026-04-01 8:17 ` [PATCH 2/2 net v2] ipv4: nexthop: allocate skb dynamically in rtm_get_nexthop() Fernando Fernandez Mancera
2026-04-01 8:59 ` Eric Dumazet
2026-04-01 13:02 ` Ido Schimmel
2026-04-01 13:24 ` Fernando Fernandez Mancera
2026-04-01 14:50 ` Fernando Fernandez Mancera
2026-04-02 0:28 ` Jakub Kicinski
2026-04-01 8:53 ` [PATCH 1/2 net v2] ipv4: nexthop: avoid duplicate NHA_HW_STATS_ENABLE on nexthop group dump Eric Dumazet
2026-04-01 12:50 ` Ido Schimmel