From: Jesper Dangaard Brouer <brouer@redhat.com>
To: Lorenzo Bianconi <lorenzo@kernel.org>
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org,
lorenzo.bianconi@redhat.com, davem@davemloft.net,
kuba@kernel.org, ilias.apalodimas@linaro.org, brouer@redhat.com
Subject: Re: [PATCH net-next 1/4] net: xdp: introduce bulking for xdp tx return path
Date: Wed, 28 Oct 2020 12:34:19 +0100 [thread overview]
Message-ID: <20201028123419.27e1ac54@carbon> (raw)
In-Reply-To: <7495b5ac96b0fd2bf5ab79b12e01bf0ee0fff803.1603824486.git.lorenzo@kernel.org>
On Tue, 27 Oct 2020 20:04:07 +0100
Lorenzo Bianconi <lorenzo@kernel.org> wrote:
> Introduce bulking capability in xdp tx return path (XDP_REDIRECT).
> xdp_return_frame is usually run inside the driver NAPI tx completion
> loop so it is possible to batch it.
> Current implementation considers only page_pool memory model.
> Convert mvneta driver to xdp_return_frame_bulk APIs.
>
> Suggested-by: Jesper Dangaard Brouer <brouer@redhat.com>
I think you/we need to explain better in this commit message what the
idea/concept behind this bulk return is, or even explain it as a
comment above "xdp_return_frame_bulk".

Maybe add/append the text below to the commit message:
The bulk API introduced is a defer-and-flush API, that will defer
the return if the xdp_mem_allocator object is the same, identified
via the mem.id field (xdp_mem_info). Thus, the flush operation will
operate on the same xdp_mem_allocator object.

The bulk queue size of 16 is no coincidence. It is connected to how
XDP redirect will bulk xmit (up to 16) frames. Thus, the idea is for the
API to find these boundaries (via mem.id match), which is optimal for
both the I-cache and D-cache for the memory allocator code and object.

The data structure (xdp_frame_bulk) used for deferred elements is
stored/allocated on the function call-stack, which allows lock-free
access.
> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
> ---
[...]
> diff --git a/include/net/xdp.h b/include/net/xdp.h
> index 3814fb631d52..9567110845ef 100644
> --- a/include/net/xdp.h
> +++ b/include/net/xdp.h
> @@ -104,6 +104,12 @@ struct xdp_frame {
> struct net_device *dev_rx; /* used by cpumap */
> };
>
> +#define XDP_BULK_QUEUE_SIZE 16
Maybe "#define DEV_MAP_BULK_SIZE 16" should be defined in terms of
XDP_BULK_QUEUE_SIZE, to express the described connection.
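
Something like this (just a sketch; I have not checked where the
redefinition would best live in the headers):

```c
/* Sketch: make the devmap xmit bulk size and the frame-return bulk
 * size visibly the same constant, instead of two independent 16s. */
#define XDP_BULK_QUEUE_SIZE 16
#define DEV_MAP_BULK_SIZE XDP_BULK_QUEUE_SIZE

_Static_assert(DEV_MAP_BULK_SIZE == XDP_BULK_QUEUE_SIZE,
	       "redirect xmit bulking and return bulking are tuned together");
```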
> +struct xdp_frame_bulk {
> + void *q[XDP_BULK_QUEUE_SIZE];
> + int count;
> + void *xa;
Just a hunch (not benchmarked), but I think it will be slightly more
cache-friendly to place 'count' and '*xa' above the '*q' array. (It
might not matter at all, as we cannot control the start alignment when
this is on the stack.)
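
I.e. something like this (untested, and the layout win is speculative):

```c
#include <stddef.h>

#define XDP_BULK_QUEUE_SIZE 16

/* Hot scalar fields first, array last, so 'count' and 'xa' share the
 * first cache line with the start of 'q'. Purely a hunch, as noted. */
struct xdp_frame_bulk {
	int count;
	void *xa;
	void *q[XDP_BULK_QUEUE_SIZE];
};
```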
> +};
[...]
> diff --git a/net/core/xdp.c b/net/core/xdp.c
> index 48aba933a5a8..93eabd789246 100644
> --- a/net/core/xdp.c
> +++ b/net/core/xdp.c
> @@ -380,6 +380,57 @@ void xdp_return_frame_rx_napi(struct xdp_frame *xdpf)
> }
> EXPORT_SYMBOL_GPL(xdp_return_frame_rx_napi);
>
> +void xdp_flush_frame_bulk(struct xdp_frame_bulk *bq)
> +{
> + struct xdp_mem_allocator *xa = bq->xa;
> + int i;
> +
> + if (unlikely(!xa))
> + return;
> +
> + for (i = 0; i < bq->count; i++) {
> + struct page *page = virt_to_head_page(bq->q[i]);
> +
> + page_pool_put_full_page(xa->page_pool, page, false);
> + }
> + bq->count = 0;
> +}
> +EXPORT_SYMBOL_GPL(xdp_flush_frame_bulk);
> +
Wondering if we should have a comment that explains the intent and idea
behind this function?
/* Defers return when frame belongs to same mem.id as previous frame */
> +void xdp_return_frame_bulk(struct xdp_frame *xdpf,
> + struct xdp_frame_bulk *bq)
> +{
> + struct xdp_mem_info *mem = &xdpf->mem;
> + struct xdp_mem_allocator *xa, *nxa;
> +
> + if (mem->type != MEM_TYPE_PAGE_POOL) {
> + __xdp_return(xdpf->data, &xdpf->mem, false);
> + return;
> + }
> +
> + rcu_read_lock();
> +
> + xa = bq->xa;
> + if (unlikely(!xa || mem->id != xa->mem.id)) {
> + nxa = rhashtable_lookup(mem_id_ht, &mem->id, mem_id_rht_params);
> + if (unlikely(!xa)) {
> + bq->count = 0;
> + bq->xa = nxa;
> + xa = nxa;
> + }
> + }
> +
> + if (mem->id != xa->mem.id || bq->count == XDP_BULK_QUEUE_SIZE)
> + xdp_flush_frame_bulk(bq);
> +
> + bq->q[bq->count++] = xdpf->data;
> + if (mem->id != xa->mem.id)
> + bq->xa = nxa;
> +
> + rcu_read_unlock();
> +}
> +EXPORT_SYMBOL_GPL(xdp_return_frame_bulk);
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer