* [PATCH net] xsk: Bring back busy polling support
@ 2025-01-09 0:34 Stanislav Fomichev
From: Stanislav Fomichev @ 2025-01-09 0:34 UTC (permalink / raw)
To: netdev
Cc: davem, edumazet, kuba, pabeni, linux-kernel, bpf, horms, ast,
daniel, hawk, john.fastabend, bjorn, magnus.karlsson,
maciej.fijalkowski, jonathan.lemon, jdamato, mkarsten
Commit 86e25f40aa1e ("net: napi: Add napi_config") moved the napi->napi_id
assignment to a later point in time (napi_hash_add_with_id). This breaks
__xdp_rxq_info_reg, which copies the napi_id at an earlier time and now
stores a napi_id of 0. It also makes sk_mark_napi_id_once_xdp and
__sk_mark_napi_id_once useless because they now operate on a napi_id of 0.
Since sk_busy_loop requires a valid napi_id to busy-poll on, there is no way
to busy-poll AF_XDP sockets anymore.

Bring back the ability to busy-poll on XSK by resolving the socket's napi_id
at bind time instead. This relies on the relatively recent
netif_queue_set_napi, but at this point most popular drivers should have
been converted. This also removes the per-tx/rx cycles that used to check
and/or set the napi_id value.
Confirmed by running a busy-polling AF_XDP socket
(github.com/fomichev/xskrtt) on mlx5 and looking at BusyPollRxPackets
from /proc/net/netstat.
Fixes: 86e25f40aa1e ("net: napi: Add napi_config")
Signed-off-by: Stanislav Fomichev <sdf@fomichev.me>
---
include/net/busy_poll.h | 8 --------
include/net/xdp.h | 1 -
include/net/xdp_sock_drv.h | 14 --------------
net/core/xdp.c | 1 -
net/xdp/xsk.c | 14 +++++++++-----
5 files changed, 9 insertions(+), 29 deletions(-)
diff --git a/include/net/busy_poll.h b/include/net/busy_poll.h
index c858270141bc..c39a426ebf52 100644
--- a/include/net/busy_poll.h
+++ b/include/net/busy_poll.h
@@ -174,12 +174,4 @@ static inline void sk_mark_napi_id_once(struct sock *sk,
#endif
}
-static inline void sk_mark_napi_id_once_xdp(struct sock *sk,
- const struct xdp_buff *xdp)
-{
-#ifdef CONFIG_NET_RX_BUSY_POLL
- __sk_mark_napi_id_once(sk, xdp->rxq->napi_id);
-#endif
-}
-
#endif /* _LINUX_NET_BUSY_POLL_H */
diff --git a/include/net/xdp.h b/include/net/xdp.h
index e6770dd40c91..b5b10f2b88e5 100644
--- a/include/net/xdp.h
+++ b/include/net/xdp.h
@@ -62,7 +62,6 @@ struct xdp_rxq_info {
u32 queue_index;
u32 reg_state;
struct xdp_mem_info mem;
- unsigned int napi_id;
u32 frag_size;
} ____cacheline_aligned; /* perf critical, avoid false-sharing */
diff --git a/include/net/xdp_sock_drv.h b/include/net/xdp_sock_drv.h
index 40085afd9160..7a7316d9c0da 100644
--- a/include/net/xdp_sock_drv.h
+++ b/include/net/xdp_sock_drv.h
@@ -59,15 +59,6 @@ static inline void xsk_pool_fill_cb(struct xsk_buff_pool *pool,
xp_fill_cb(pool, desc);
}
-static inline unsigned int xsk_pool_get_napi_id(struct xsk_buff_pool *pool)
-{
-#ifdef CONFIG_NET_RX_BUSY_POLL
- return pool->heads[0].xdp.rxq->napi_id;
-#else
- return 0;
-#endif
-}
-
static inline void xsk_pool_dma_unmap(struct xsk_buff_pool *pool,
unsigned long attrs)
{
@@ -306,11 +297,6 @@ static inline void xsk_pool_fill_cb(struct xsk_buff_pool *pool,
{
}
-static inline unsigned int xsk_pool_get_napi_id(struct xsk_buff_pool *pool)
-{
- return 0;
-}
-
static inline void xsk_pool_dma_unmap(struct xsk_buff_pool *pool,
unsigned long attrs)
{
diff --git a/net/core/xdp.c b/net/core/xdp.c
index bcc5551c6424..2315feed94ef 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -186,7 +186,6 @@ int __xdp_rxq_info_reg(struct xdp_rxq_info *xdp_rxq,
xdp_rxq_info_init(xdp_rxq);
xdp_rxq->dev = dev;
xdp_rxq->queue_index = queue_index;
- xdp_rxq->napi_id = napi_id;
xdp_rxq->frag_size = frag_size;
xdp_rxq->reg_state = REG_STATE_REGISTERED;
diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index 3fa70286c846..89d2bef96469 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -322,7 +322,6 @@ static int xsk_rcv_check(struct xdp_sock *xs, struct xdp_buff *xdp, u32 len)
return -ENOSPC;
}
- sk_mark_napi_id_once_xdp(&xs->sk, xdp);
return 0;
}
@@ -908,11 +907,8 @@ static int __xsk_sendmsg(struct socket *sock, struct msghdr *m, size_t total_len
if (unlikely(!xs->tx))
return -ENOBUFS;
- if (sk_can_busy_loop(sk)) {
- if (xs->zc)
- __sk_mark_napi_id_once(sk, xsk_pool_get_napi_id(xs->pool));
+ if (sk_can_busy_loop(sk))
sk_busy_loop(sk, 1); /* only support non-blocking sockets */
- }
if (xs->zc && xsk_no_wakeup(sk))
return 0;
@@ -1298,6 +1294,14 @@ static int xsk_bind(struct socket *sock, struct sockaddr *addr, int addr_len)
xs->queue_id = qid;
xp_add_xsk(xs->pool, xs);
+ if (xs->zc && qid < dev->real_num_rx_queues) {
+ struct netdev_rx_queue *rxq;
+
+ rxq = __netif_get_rx_queue(dev, qid);
+ if (rxq->napi)
+ __sk_mark_napi_id_once(sk, rxq->napi->napi_id);
+ }
+
out_unlock:
if (err) {
dev_put(dev);
--
2.47.1
* Re: [PATCH net] xsk: Bring back busy polling support
From: Magnus Karlsson @ 2025-01-09 15:22 UTC (permalink / raw)
To: Stanislav Fomichev
Cc: netdev, davem, edumazet, kuba, pabeni, linux-kernel, bpf, horms,
ast, daniel, hawk, john.fastabend, bjorn, magnus.karlsson,
maciej.fijalkowski, jonathan.lemon, jdamato, mkarsten
On Thu, 9 Jan 2025 at 01:35, Stanislav Fomichev <sdf@fomichev.me> wrote:
>
> Commit 86e25f40aa1e ("net: napi: Add napi_config") moved napi->napi_id
> assignment to a later point in time (napi_hash_add_with_id). This breaks
> __xdp_rxq_info_reg which copies napi_id at an earlier time and now
> stores 0 napi_id. It also makes sk_mark_napi_id_once_xdp and
> __sk_mark_napi_id_once useless because they now work against 0 napi_id.
> Since sk_busy_loop requires valid napi_id to busy-poll on, there is no way
> to busy-poll AF_XDP sockets anymore.
>
> Bring back the ability to busy-poll on XSK by resolving socket's napi_id
> at bind time. This relies on relatively recent netif_queue_set_napi,
> but (assume) at this point most popular drivers should have been converted.
> This also removes per-tx/rx cycles which used to check and/or set
> the napi_id value.
>
> Confirmed by running a busy-polling AF_XDP socket
> (github.com/fomichev/xskrtt) on mlx5 and looking at BusyPollRxPackets
> from /proc/net/netstat.
Thanks Stanislav for finding and fixing this. As a bonus, the
resulting code is much nicer too.
I just took a look at the Intel drivers and some of our drivers have
not been converted to use netif_queue_set_napi() yet. Just ice, e1000,
and e1000e use it. But that is on us to fix.
From the xsk point of view:
Acked-by: Magnus Karlsson <magnus.karlsson@intel.com>
* Re: [PATCH net] xsk: Bring back busy polling support
From: Jakub Kicinski @ 2025-01-09 16:43 UTC (permalink / raw)
To: Magnus Karlsson
Cc: Stanislav Fomichev, netdev, davem, edumazet, pabeni, linux-kernel,
bpf, horms, ast, daniel, hawk, john.fastabend, bjorn,
magnus.karlsson, maciej.fijalkowski, jonathan.lemon, jdamato,
mkarsten
On Thu, 9 Jan 2025 16:22:16 +0100 Magnus Karlsson wrote:
> > Confirmed by running a busy-polling AF_XDP socket
> > (github.com/fomichev/xskrtt) on mlx5 and looking at BusyPollRxPackets
> > from /proc/net/netstat.
>
> Thanks Stanislav for finding and fixing this. As a bonus, the
> resulting code is much nicer too.
>
> I just took a look at the Intel drivers and some of our drivers have
> not been converted to use netif_queue_set_napi() yet. Just ice, e1000,
> and e1000e use it. But that is on us to fix.
Yup, on a quick look yesterday I think I spotted a few embedded
drivers (stmmac, tsnep, dpaa2), nfp and virtio_net which don't seem
to link the NAPI to queues. But I can't think of a better fix, and
updating those drivers to link NAPI to queues will be generally
beneficial, so in case someone else applies this:
Reviewed-by: Jakub Kicinski <kuba@kernel.org>
* Re: [PATCH net] xsk: Bring back busy polling support
From: Joe Damato @ 2025-01-09 17:36 UTC (permalink / raw)
To: Magnus Karlsson
Cc: Stanislav Fomichev, netdev, davem, edumazet, kuba, pabeni,
linux-kernel, bpf, horms, ast, daniel, hawk, john.fastabend,
bjorn, magnus.karlsson, maciej.fijalkowski, jonathan.lemon,
mkarsten
On Thu, Jan 09, 2025 at 04:22:16PM +0100, Magnus Karlsson wrote:
> On Thu, 9 Jan 2025 at 01:35, Stanislav Fomichev <sdf@fomichev.me> wrote:
> >
> > Commit 86e25f40aa1e ("net: napi: Add napi_config") moved napi->napi_id
> > assignment to a later point in time (napi_hash_add_with_id). This breaks
> > __xdp_rxq_info_reg which copies napi_id at an earlier time and now
> > stores 0 napi_id. It also makes sk_mark_napi_id_once_xdp and
> > __sk_mark_napi_id_once useless because they now work against 0 napi_id.
> > Since sk_busy_loop requires valid napi_id to busy-poll on, there is no way
> > to busy-poll AF_XDP sockets anymore.
> >
> > Bring back the ability to busy-poll on XSK by resolving socket's napi_id
> > at bind time. This relies on relatively recent netif_queue_set_napi,
> > but (assume) at this point most popular drivers should have been converted.
> > This also removes per-tx/rx cycles which used to check and/or set
> > the napi_id value.
> >
> > Confirmed by running a busy-polling AF_XDP socket
> > (github.com/fomichev/xskrtt) on mlx5 and looking at BusyPollRxPackets
> > from /proc/net/netstat.
>
> Thanks Stanislav for finding and fixing this. As a bonus, the
> resulting code is much nicer too.
>
> I just took a look at the Intel drivers and some of our drivers have
> not been converted to use netif_queue_set_napi() yet. Just ice, e1000,
> and e1000e use it. But that is on us to fix.
igc also supports it ;)
I tried to add support to i40e some time ago, but ran into some
issues and didn't hear back, so I gave up on i40e.
In case my previous attempt is helpful for anyone at Intel, see [1].
[1]: https://lore.kernel.org/lkml/20240410043936.206169-1-jdamato@fastly.com/
* Re: [PATCH net] xsk: Bring back busy polling support
From: Joe Damato @ 2025-01-09 17:32 UTC (permalink / raw)
To: Stanislav Fomichev
Cc: netdev, davem, edumazet, kuba, pabeni, linux-kernel, bpf, horms,
ast, daniel, hawk, john.fastabend, bjorn, magnus.karlsson,
maciej.fijalkowski, jonathan.lemon, mkarsten, alazar
On Wed, Jan 08, 2025 at 04:34:36PM -0800, Stanislav Fomichev wrote:
> Commit 86e25f40aa1e ("net: napi: Add napi_config") moved napi->napi_id
> assignment to a later point in time (napi_hash_add_with_id). This breaks
> __xdp_rxq_info_reg which copies napi_id at an earlier time and now
> stores 0 napi_id. It also makes sk_mark_napi_id_once_xdp and
> __sk_mark_napi_id_once useless because they now work against 0 napi_id.
> Since sk_busy_loop requires valid napi_id to busy-poll on, there is no way
> to busy-poll AF_XDP sockets anymore.
>
> Bring back the ability to busy-poll on XSK by resolving socket's napi_id
> at bind time. This relies on relatively recent netif_queue_set_napi,
> but (assume) at this point most popular drivers should have been converted.
> This also removes per-tx/rx cycles which used to check and/or set
> the napi_id value.
>
> Confirmed by running a busy-polling AF_XDP socket
> (github.com/fomichev/xskrtt) on mlx5 and looking at BusyPollRxPackets
> from /proc/net/netstat.
Thanks Stanislav for finding and fixing this.
I've CC'd Alex, who reported a bug a couple of weeks ago that might be
fixed by this change.
Alex: would you mind applying this patch to your tree to see if this
solves the issue you reported [1] ?
[1]: https://lore.kernel.org/netdev/DM8PR12MB5447837576EA58F490D6D4BFAD052@DM8PR12MB5447.namprd12.prod.outlook.com/
* Re: [PATCH net] xsk: Bring back busy polling support
From: patchwork-bot+netdevbpf @ 2025-01-11 2:20 UTC (permalink / raw)
To: Stanislav Fomichev
Cc: netdev, davem, edumazet, kuba, pabeni, linux-kernel, bpf, horms,
ast, daniel, hawk, john.fastabend, bjorn, magnus.karlsson,
maciej.fijalkowski, jonathan.lemon, jdamato, mkarsten
Hello:
This patch was applied to netdev/net.git (main)
by Jakub Kicinski <kuba@kernel.org>:
On Wed, 8 Jan 2025 16:34:36 -0800 you wrote:
> Commit 86e25f40aa1e ("net: napi: Add napi_config") moved napi->napi_id
> assignment to a later point in time (napi_hash_add_with_id). This breaks
> __xdp_rxq_info_reg which copies napi_id at an earlier time and now
> stores 0 napi_id. It also makes sk_mark_napi_id_once_xdp and
> __sk_mark_napi_id_once useless because they now work against 0 napi_id.
> Since sk_busy_loop requires valid napi_id to busy-poll on, there is no way
> to busy-poll AF_XDP sockets anymore.
>
> [...]
Here is the summary with links:
- [net] xsk: Bring back busy polling support
https://git.kernel.org/netdev/net/c/5ef44b3cb43b
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html