From: Jesper Dangaard Brouer <brouer@redhat.com>
To: "Björn Töpel" <bjorn.topel@gmail.com>
Cc: brouer@redhat.com, ast@kernel.org, daniel@iogearbox.net,
netdev@vger.kernel.org, bpf@vger.kernel.org,
"Björn Töpel" <bjorn.topel@intel.com>,
magnus.karlsson@intel.com, davem@davemloft.net, kuba@kernel.org,
john.fastabend@gmail.com, intel-wired-lan@lists.osuosl.org
Subject: Re: [PATCH bpf-next 6/6] ixgbe, xsk: finish napi loop if AF_XDP Rx queue is full
Date: Fri, 4 Sep 2020 17:35:40 +0200 [thread overview]
Message-ID: <20200904173540.3a617eee@carbon> (raw)
In-Reply-To: <20200904135332.60259-7-bjorn.topel@gmail.com>
On Fri, 4 Sep 2020 15:53:31 +0200
Björn Töpel <bjorn.topel@gmail.com> wrote:
> From: Björn Töpel <bjorn.topel@intel.com>
>
> Make the AF_XDP zero-copy path aware that a redirect failure was due
> to a full Rx queue. If so, exit the napi loop as soon as possible
> (exit the softirq processing), so that the userspace AF_XDP process
> can hopefully empty the Rx queue. This mainly helps the "one core
> scenario", where the userland process and the Rx softirq processing
> are on the same core.
>
> Note that the early exit can only be performed if the "need wakeup"
> feature is enabled, because otherwise there is no notification
> mechanism available from the kernel side.
>
> This requires that the driver starts using the newly introduced
> xdp_do_redirect_ext() and xsk_do_redirect_rx_full() functions.
>
> Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
> ---
> drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c | 23 ++++++++++++++------
> 1 file changed, 16 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
> index 3771857cf887..a4aebfd986b3 100644
> --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
> +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
> @@ -93,9 +93,11 @@ int ixgbe_xsk_pool_setup(struct ixgbe_adapter *adapter,
>
> static int ixgbe_run_xdp_zc(struct ixgbe_adapter *adapter,
> struct ixgbe_ring *rx_ring,
> - struct xdp_buff *xdp)
> + struct xdp_buff *xdp,
> + bool *early_exit)
> {
> int err, result = IXGBE_XDP_PASS;
> + enum bpf_map_type map_type;
> struct bpf_prog *xdp_prog;
> struct xdp_frame *xdpf;
> u32 act;
> @@ -116,8 +118,13 @@ static int ixgbe_run_xdp_zc(struct ixgbe_adapter *adapter,
> result = ixgbe_xmit_xdp_ring(adapter, xdpf);
> break;
> case XDP_REDIRECT:
> - err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog);
> - result = !err ? IXGBE_XDP_REDIR : IXGBE_XDP_CONSUMED;
> + err = xdp_do_redirect_ext(rx_ring->netdev, xdp, xdp_prog, &map_type);
> + if (err) {
> + *early_exit = xsk_do_redirect_rx_full(err, map_type);
Have you tried calling xdp_do_flush() (which calls __xsk_map_flush())
and (I guess) xsk_set_rx_need_wakeup() here, instead of stopping the
loop? (Or doing this in the xsk core.)
Looking at the code, the AF_XDP frames are "published" to the Rx queue
rather late. As an orthogonal optimization, have you considered
"publishing" the ring producer earlier, e.g. when the queue is
half-full?
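A toy single-producer ring model (not the real xsk queue code; all names are illustrative) of that "publish earlier" idea: instead of making new entries visible to the consumer only once at the end of the poll, the producer index is published whenever a batch threshold, here half the ring, accumulates:

```c
#include <stddef.h>

#define RING_SIZE 8

struct toy_ring {
	int slots[RING_SIZE];
	unsigned int cached_prod;  /* producer's private count      */
	unsigned int visible_prod; /* what the consumer can observe */
	int publishes;             /* how many publish operations   */
};

/* Enqueue one entry; publish when a half-ring batch has accumulated. */
static void ring_produce(struct toy_ring *r, int val)
{
	r->slots[r->cached_prod % RING_SIZE] = val;
	r->cached_prod++;
	if (r->cached_prod - r->visible_prod >= RING_SIZE / 2) {
		/* the real code would use a release store here */
		r->visible_prod = r->cached_prod;
		r->publishes++;
	}
}

/* Flush any remainder at the end of the poll loop. */
static void ring_publish_all(struct toy_ring *r)
{
	if (r->cached_prod != r->visible_prod) {
		r->visible_prod = r->cached_prod;
		r->publishes++;
	}
}
```

The trade-off is extra release stores against letting the consumer start draining before the poll finishes.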
> + result = IXGBE_XDP_CONSUMED;
> + } else {
> + result = IXGBE_XDP_REDIR;
> + }
> break;
> default:
> bpf_warn_invalid_xdp_action(act);
> @@ -235,8 +242,8 @@ int ixgbe_clean_rx_irq_zc(struct ixgbe_q_vector *q_vector,
> unsigned int total_rx_bytes = 0, total_rx_packets = 0;
> struct ixgbe_adapter *adapter = q_vector->adapter;
> u16 cleaned_count = ixgbe_desc_unused(rx_ring);
> + bool early_exit = false, failure = false;
> unsigned int xdp_res, xdp_xmit = 0;
> - bool failure = false;
> struct sk_buff *skb;
>
> while (likely(total_rx_packets < budget)) {
> @@ -288,7 +295,7 @@ int ixgbe_clean_rx_irq_zc(struct ixgbe_q_vector *q_vector,
>
> bi->xdp->data_end = bi->xdp->data + size;
> xsk_buff_dma_sync_for_cpu(bi->xdp, rx_ring->xsk_pool);
> - xdp_res = ixgbe_run_xdp_zc(adapter, rx_ring, bi->xdp);
> + xdp_res = ixgbe_run_xdp_zc(adapter, rx_ring, bi->xdp, &early_exit);
>
> if (xdp_res) {
> if (xdp_res & (IXGBE_XDP_TX | IXGBE_XDP_REDIR))
> @@ -302,6 +309,8 @@ int ixgbe_clean_rx_irq_zc(struct ixgbe_q_vector *q_vector,
>
> cleaned_count++;
> ixgbe_inc_ntc(rx_ring);
> + if (early_exit)
> + break;
> continue;
> }
>
> @@ -346,12 +355,12 @@ int ixgbe_clean_rx_irq_zc(struct ixgbe_q_vector *q_vector,
> q_vector->rx.total_bytes += total_rx_bytes;
>
> if (xsk_uses_need_wakeup(rx_ring->xsk_pool)) {
> - if (failure || rx_ring->next_to_clean == rx_ring->next_to_use)
> + if (early_exit || failure || rx_ring->next_to_clean == rx_ring->next_to_use)
> xsk_set_rx_need_wakeup(rx_ring->xsk_pool);
> else
> xsk_clear_rx_need_wakeup(rx_ring->xsk_pool);
>
> - return (int)total_rx_packets;
> + return early_exit ? 0 : (int)total_rx_packets;
> }
> return failure ? budget : (int)total_rx_packets;
> }
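For readers following the return-value change above: it leans on the NAPI contract that a poll callback returning less work than its budget (here, 0 on early exit) tells the core the softirq is done, so interrupts can be re-armed and the CPU yielded to the userspace AF_XDP process. A toy model of that contract (illustrative userspace C, not kernel code):

```c
#include <stdbool.h>

/* Toy model of the NAPI polling contract: a poll callback returning
 * less than the budget signals completion, so the core "completes"
 * NAPI and re-arms interrupts; returning the full budget keeps the
 * softirq loop going. */
static bool napi_round(int (*poll)(int budget), int budget, bool *rearmed)
{
	int work = poll(budget);

	if (work < budget) {
		*rearmed = true; /* models napi_complete_done() + IRQ re-enable */
		return false;    /* stop softirq processing                     */
	}
	*rearmed = false;
	return true;             /* reschedule: more work may be pending        */
}

/* An early-exiting poll like the patch: report 0 so the softirq ends
 * immediately and userspace gets the core to drain the Rx queue. */
static int poll_early_exit(int budget)
{
	(void)budget;
	return 0;
}
```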
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer