BPF List
* [PATCH v2 bpf-next] xsk: mark napi_id on sendmsg()
@ 2022-07-07 13:08 Maciej Fijalkowski
  2022-07-14 12:39 ` Magnus Karlsson
  2022-07-14 20:50 ` patchwork-bot+netdevbpf
  0 siblings, 2 replies; 3+ messages in thread
From: Maciej Fijalkowski @ 2022-07-07 13:08 UTC (permalink / raw)
  To: bpf, ast, daniel, andrii
  Cc: netdev, magnus.karlsson, bjorn, kuba, Maciej Fijalkowski

When an application runs in busy poll mode and only sends packets
without receiving any, it is currently impossible to enter
napi_busy_loop(), as napi_id is marked only on the Rx side, in
xsk_rcv_check(). There, napi_id is taken from the xdp_rxq_info carried
by the xdp_buff. From the Tx perspective we have no access to it; what
we do have handy is the xsk pool.

The xsk pool manages internal xdp_buff wrappers called xdp_buff_xsk.
AF_XDP ZC-enabled drivers call xp_set_rxq_info(), so each xdp_buff_xsk
holds a valid pointer to the xdp_rxq_info of its underlying queue.
Therefore, on the Tx side, napi_id can be pulled from
xs->pool->heads[0].xdp.rxq->napi_id. Hide this pointer chase behind a
helper function, xsk_pool_get_napi_id().

Do this only for sockets working in ZC mode, as otherwise the rxq
pointers would not be initialized.
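
As a side note, a minimal user-space sketch of how an application would
opt a socket into the busy polling referenced above (the values are
illustrative only, and the option defines are guarded for older uapi
headers; this is a sketch, not part of the patch):

```c
#include <errno.h>
#include <sys/socket.h>

#ifndef SO_BUSY_POLL
#define SO_BUSY_POLL 46
#endif
#ifndef SO_PREFER_BUSY_POLL
#define SO_PREFER_BUSY_POLL 69
#endif
#ifndef SO_BUSY_POLL_BUDGET
#define SO_BUSY_POLL_BUDGET 70
#endif

/* Opt a socket fd into busy polling; returns 0 on success, -errno on
 * failure. The timeout and budget values below are illustrative. */
int enable_busy_poll(int fd)
{
	int prefer = 1;
	int usecs = 20;		/* busy-poll timeout in microseconds */
	int budget = 64;	/* max packets processed per poll */

	if (setsockopt(fd, SOL_SOCKET, SO_PREFER_BUSY_POLL,
		       &prefer, sizeof(prefer)))
		return -errno;
	if (setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL,
		       &usecs, sizeof(usecs)))
		return -errno;
	if (setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL_BUDGET,
		       &budget, sizeof(budget)))
		return -errno;
	return 0;
}
```

With busy polling enabled this way on an AF_XDP socket, a Tx-only
workload hits exactly the __xsk_sendmsg() path this patch touches.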

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
---

v2:
* target bpf-next instead of bpf and don't treat it as fix (Bjorn)
* hide pointer chasing under helper function (Bjorn)

 include/net/xdp_sock_drv.h | 14 ++++++++++++++
 net/xdp/xsk.c              |  5 ++++-
 2 files changed, 18 insertions(+), 1 deletion(-)

diff --git a/include/net/xdp_sock_drv.h b/include/net/xdp_sock_drv.h
index 4aa031849668..4277b0dcee05 100644
--- a/include/net/xdp_sock_drv.h
+++ b/include/net/xdp_sock_drv.h
@@ -44,6 +44,15 @@ static inline void xsk_pool_set_rxq_info(struct xsk_buff_pool *pool,
 	xp_set_rxq_info(pool, rxq);
 }
 
+static inline unsigned int xsk_pool_get_napi_id(struct xsk_buff_pool *pool)
+{
+#ifdef CONFIG_NET_RX_BUSY_POLL
+	return pool->heads[0].xdp.rxq->napi_id;
+#else
+	return 0;
+#endif
+}
+
 static inline void xsk_pool_dma_unmap(struct xsk_buff_pool *pool,
 				      unsigned long attrs)
 {
@@ -198,6 +207,11 @@ static inline void xsk_pool_set_rxq_info(struct xsk_buff_pool *pool,
 {
 }
 
+static inline unsigned int xsk_pool_get_napi_id(struct xsk_buff_pool *pool)
+{
+	return 0;
+}
+
 static inline void xsk_pool_dma_unmap(struct xsk_buff_pool *pool,
 				      unsigned long attrs)
 {
diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index 19ac872a6624..86a97da7e50b 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -637,8 +637,11 @@ static int __xsk_sendmsg(struct socket *sock, struct msghdr *m, size_t total_len
 	if (unlikely(need_wait))
 		return -EOPNOTSUPP;
 
-	if (sk_can_busy_loop(sk))
+	if (sk_can_busy_loop(sk)) {
+		if (xs->zc)
+			__sk_mark_napi_id_once(sk, xsk_pool_get_napi_id(xs->pool));
 		sk_busy_loop(sk, 1); /* only support non-blocking sockets */
+	}
 
 	if (xs->zc && xsk_no_wakeup(sk))
 		return 0;
-- 
2.27.0
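
To make the pointer chase above easier to follow outside the kernel
tree, here is a standalone mock (the struct names mirror the kernel's,
but the layouts are trimmed to just the fields the helper dereferences,
and the CONFIG_NET_RX_BUSY_POLL guard is omitted; a sketch, not kernel
code):

```c
/* Trimmed mock of the kernel layout traversed by the helper. */
struct xdp_rxq_info { unsigned int napi_id; };
struct xdp_buff { struct xdp_rxq_info *rxq; };
struct xdp_buff_xsk { struct xdp_buff xdp; };
struct xsk_buff_pool { struct xdp_buff_xsk *heads; };

/* Same chase as the patch's helper: xp_set_rxq_info() points every
 * head at the queue's xdp_rxq_info, so heads[0] is representative. */
unsigned int xsk_pool_get_napi_id(struct xsk_buff_pool *pool)
{
	return pool->heads[0].xdp.rxq->napi_id;
}
```

Wiring heads[0].xdp.rxq at an xdp_rxq_info whose napi_id is set and
calling the helper returns that id, which is what sendmsg() then feeds
to __sk_mark_napi_id_once().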



* Re: [PATCH v2 bpf-next] xsk: mark napi_id on sendmsg()
  2022-07-07 13:08 [PATCH v2 bpf-next] xsk: mark napi_id on sendmsg() Maciej Fijalkowski
@ 2022-07-14 12:39 ` Magnus Karlsson
  2022-07-14 20:50 ` patchwork-bot+netdevbpf
  1 sibling, 0 replies; 3+ messages in thread
From: Magnus Karlsson @ 2022-07-14 12:39 UTC (permalink / raw)
  To: Maciej Fijalkowski
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Network Development, Karlsson, Magnus, Björn Töpel,
	Jakub Kicinski

On Thu, Jul 7, 2022 at 3:20 PM Maciej Fijalkowski
<maciej.fijalkowski@intel.com> wrote:
>
> When an application runs in busy poll mode and only sends packets
> without receiving any, it is currently impossible to enter
> napi_busy_loop(), as napi_id is marked only on the Rx side, in
> xsk_rcv_check(). There, napi_id is taken from the xdp_rxq_info carried
> by the xdp_buff. From the Tx perspective we have no access to it; what
> we do have handy is the xsk pool.
>
> The xsk pool manages internal xdp_buff wrappers called xdp_buff_xsk.
> AF_XDP ZC-enabled drivers call xp_set_rxq_info(), so each xdp_buff_xsk
> holds a valid pointer to the xdp_rxq_info of its underlying queue.
> Therefore, on the Tx side, napi_id can be pulled from
> xs->pool->heads[0].xdp.rxq->napi_id. Hide this pointer chase behind a
> helper function, xsk_pool_get_napi_id().
>
> Do this only for sockets working in ZC mode, as otherwise the rxq
> pointers would not be initialized.

Thanks Maciej.

Acked-by: Magnus Karlsson <magnus.karlsson@intel.com>

> Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> ---
>
> v2:
> * target bpf-next instead of bpf and don't treat it as fix (Bjorn)
> * hide pointer chasing under helper function (Bjorn)
>
>  include/net/xdp_sock_drv.h | 14 ++++++++++++++
>  net/xdp/xsk.c              |  5 ++++-
>  2 files changed, 18 insertions(+), 1 deletion(-)
>
> diff --git a/include/net/xdp_sock_drv.h b/include/net/xdp_sock_drv.h
> index 4aa031849668..4277b0dcee05 100644
> --- a/include/net/xdp_sock_drv.h
> +++ b/include/net/xdp_sock_drv.h
> @@ -44,6 +44,15 @@ static inline void xsk_pool_set_rxq_info(struct xsk_buff_pool *pool,
>         xp_set_rxq_info(pool, rxq);
>  }
>
> +static inline unsigned int xsk_pool_get_napi_id(struct xsk_buff_pool *pool)
> +{
> +#ifdef CONFIG_NET_RX_BUSY_POLL
> +       return pool->heads[0].xdp.rxq->napi_id;
> +#else
> +       return 0;
> +#endif
> +}
> +
>  static inline void xsk_pool_dma_unmap(struct xsk_buff_pool *pool,
>                                       unsigned long attrs)
>  {
> @@ -198,6 +207,11 @@ static inline void xsk_pool_set_rxq_info(struct xsk_buff_pool *pool,
>  {
>  }
>
> +static inline unsigned int xsk_pool_get_napi_id(struct xsk_buff_pool *pool)
> +{
> +       return 0;
> +}
> +
>  static inline void xsk_pool_dma_unmap(struct xsk_buff_pool *pool,
>                                       unsigned long attrs)
>  {
> diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
> index 19ac872a6624..86a97da7e50b 100644
> --- a/net/xdp/xsk.c
> +++ b/net/xdp/xsk.c
> @@ -637,8 +637,11 @@ static int __xsk_sendmsg(struct socket *sock, struct msghdr *m, size_t total_len
>         if (unlikely(need_wait))
>                 return -EOPNOTSUPP;
>
> -       if (sk_can_busy_loop(sk))
> +       if (sk_can_busy_loop(sk)) {
> +               if (xs->zc)
> +                       __sk_mark_napi_id_once(sk, xsk_pool_get_napi_id(xs->pool));
>                 sk_busy_loop(sk, 1); /* only support non-blocking sockets */
> +       }
>
>         if (xs->zc && xsk_no_wakeup(sk))
>                 return 0;
> --
> 2.27.0
>


* Re: [PATCH v2 bpf-next] xsk: mark napi_id on sendmsg()
  2022-07-07 13:08 [PATCH v2 bpf-next] xsk: mark napi_id on sendmsg() Maciej Fijalkowski
  2022-07-14 12:39 ` Magnus Karlsson
@ 2022-07-14 20:50 ` patchwork-bot+netdevbpf
  1 sibling, 0 replies; 3+ messages in thread
From: patchwork-bot+netdevbpf @ 2022-07-14 20:50 UTC (permalink / raw)
  To: Maciej Fijalkowski
  Cc: bpf, ast, daniel, andrii, netdev, magnus.karlsson, bjorn, kuba

Hello:

This patch was applied to bpf/bpf-next.git (master)
by Daniel Borkmann <daniel@iogearbox.net>:

On Thu,  7 Jul 2022 15:08:42 +0200 you wrote:
> When an application runs in busy poll mode and only sends packets
> without receiving any, it is currently impossible to enter
> napi_busy_loop(), as napi_id is marked only on the Rx side, in
> xsk_rcv_check(). There, napi_id is taken from the xdp_rxq_info carried
> by the xdp_buff. From the Tx perspective we have no access to it; what
> we do have handy is the xsk pool.
> 
> [...]

Here is the summary with links:
  - [v2,bpf-next] xsk: mark napi_id on sendmsg()
    https://git.kernel.org/bpf/bpf-next/c/ca2e1a627035

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html




