From: Jesper Dangaard Brouer <brouer@redhat.com>
To: Lorenzo Bianconi <lorenzo@kernel.org>
Cc: netdev@vger.kernel.org, lorenzo.bianconi@redhat.com,
davem@davemloft.net, thomas.petazzoni@bootlin.com,
ilias.apalodimas@linaro.org, matteo.croce@redhat.com,
mw@semihalf.com, brouer@redhat.com,
"Björn Töpel" <bjorn.topel@intel.com>
Subject: Re: [PATCH v2 net-next 5/8] net: mvneta: add basic XDP support
Date: Thu, 10 Oct 2019 10:50:40 +0200
Message-ID: <20191010105040.23e5e86f@carbon>
In-Reply-To: <0f471851967abb980d34104b64fea013b0dced7c.1570662004.git.lorenzo@kernel.org>

On Thu, 10 Oct 2019 01:18:35 +0200
Lorenzo Bianconi <lorenzo@kernel.org> wrote:
> Add basic XDP support to mvneta driver for devices that rely on software
> buffer management. Currently supported verdicts are:
> - XDP_DROP
> - XDP_PASS
> - XDP_REDIRECT
> - XDP_ABORTED
>
> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
> ---
> drivers/net/ethernet/marvell/mvneta.c | 144 ++++++++++++++++++++++++--
> 1 file changed, 135 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
> index ba4aa9bbc798..e2795dddbcaf 100644
> --- a/drivers/net/ethernet/marvell/mvneta.c
> +++ b/drivers/net/ethernet/marvell/mvneta.c
[...]
> @@ -1950,16 +1960,60 @@ int mvneta_rx_refill_queue(struct mvneta_port *pp, struct mvneta_rx_queue *rxq)
> return i;
> }
>
> +static int
> +mvneta_run_xdp(struct mvneta_port *pp, struct bpf_prog *prog,
> + struct xdp_buff *xdp)
> +{
> + u32 ret, act = bpf_prog_run_xdp(prog, xdp);
> +
> + switch (act) {
> + case XDP_PASS:
> + ret = MVNETA_XDP_PASS;
> + break;
> + case XDP_REDIRECT: {
> + int err;
> +
> + err = xdp_do_redirect(pp->dev, xdp, prog);
> + if (err) {
> + ret = MVNETA_XDP_CONSUMED;
> + xdp_return_buff(xdp);
> + } else {
> + ret = MVNETA_XDP_REDIR;
> + }
> + break;
> + }
> + default:
> + bpf_warn_invalid_xdp_action(act);
> + /* fall through */
> + case XDP_ABORTED:
> + trace_xdp_exception(pp->dev, prog, act);
> + /* fall through */
> + case XDP_DROP:
> + ret = MVNETA_XDP_CONSUMED;
> + xdp_return_buff(xdp);
Using xdp_return_buff() here is actually not optimal for performance.
I can see that other drivers, e.g. socionext/netsec.c, and AF_XDP also
use xdp_return_buff().

I do think the code looks a lot nicer with xdp_return_buff(), so maybe
we should optimize xdp_return_buff() instead of using
page_pool_recycle_direct() here? (That would also help AF_XDP?)

The problem with xdp_return_buff() is that it does a "full" lookup from
the mem.id (xdp_buff->rxq->mem.id) to find the "allocator" pointer, in
this case the page_pool pointer. Here in the driver we already have
access to the stable allocator page_pool pointer via rxq->page_pool
(struct mvneta_rx_queue).
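
Just to illustrate, a rough sketch of the direct-recycle variant. This
assumes mvneta_run_xdp() is extended to also take the rxq pointer,
which is not part of this patch:

	case XDP_DROP:
		ret = MVNETA_XDP_CONSUMED;
		/* We already know which page_pool owns this buffer, so
		 * recycle directly and skip the mem.id lookup that
		 * xdp_return_buff() does via xdp_buff->rxq.
		 */
		page_pool_recycle_direct(rxq->page_pool,
					 virt_to_head_page(xdp->data));
		break;

page_pool_recycle_direct() is safe here as we are in NAPI/softirq
context.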
> + break;
> + }
> +
> + return ret;
> +}
> +
> static int
> mvneta_swbm_rx_frame(struct mvneta_port *pp,
> struct mvneta_rx_desc *rx_desc,
> struct mvneta_rx_queue *rxq,
> - struct page *page)
> + struct bpf_prog *xdp_prog,
> + struct page *page, u32 *xdp_ret)
> {
> unsigned char *data = page_address(page);
> int data_len = -MVNETA_MH_SIZE, len;
> struct net_device *dev = pp->dev;
> enum dma_data_direction dma_dir;
> + struct xdp_buff xdp = {
> + .data_hard_start = data,
> + .data = data + MVNETA_SKB_HEADROOM + MVNETA_MH_SIZE,
> + .rxq = &rxq->xdp_rxq,
> + };
Creating the struct xdp_buff (on the call-stack) this way is not
optimal for performance (IMHO it looks nicer code-wise, but too bad).
Initializing only some of the members like this causes GCC to zero out
the remaining members (I observed this on Intel, where it uses an
expensive rep stos operation). Thus, this causes extra unnecessary
memory writes.

A further optimization is that you can avoid re-assigning:
 xdp.rxq = &rxq->xdp_rxq
for each frame, as this actually stays the same for all the frames in
this NAPI cycle. Instead allocate the xdp_buff on the caller's stack
and pass it in as a pointer.
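
Rough sketch of what I mean (untested, same member names as in this
patch):

	/* In mvneta_rx_swbm(), once per NAPI poll: */
	struct xdp_buff xdp_buf;

	xdp_buf.rxq = &rxq->xdp_rxq;

	/* In mvneta_swbm_rx_frame(..., struct xdp_buff *xdp), per
	 * frame, only write the members that actually change:
	 */
	xdp->data_hard_start = data;
	xdp->data = data + MVNETA_SKB_HEADROOM + MVNETA_MH_SIZE;
	xdp_set_data_meta_invalid(xdp);

This way GCC never has to zero the whole struct, and the rxq member is
written once per NAPI poll instead of once per frame.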
> + xdp_set_data_meta_invalid(&xdp);
>
> if (MVNETA_SKB_SIZE(rx_desc->data_size) > PAGE_SIZE) {
> len = MVNETA_MAX_RX_BUF_SIZE;
> @@ -1968,13 +2022,27 @@ mvneta_swbm_rx_frame(struct mvneta_port *pp,
> len = rx_desc->data_size;
> data_len += len - ETH_FCS_LEN;
> }
> + xdp.data_end = xdp.data + data_len;
>
> dma_dir = page_pool_get_dma_dir(rxq->page_pool);
> dma_sync_single_range_for_cpu(dev->dev.parent,
> rx_desc->buf_phys_addr, 0,
> len, dma_dir);
>
> - rxq->skb = build_skb(data, PAGE_SIZE);
> + if (xdp_prog) {
> + u32 ret;
> +
> + ret = mvneta_run_xdp(pp, xdp_prog, &xdp);
> + if (ret != MVNETA_XDP_PASS) {
> + mvneta_update_stats(pp, 1, xdp.data_end - xdp.data,
> + false);
> + rx_desc->buf_phys_addr = 0;
> + *xdp_ret |= ret;
> + return ret;
> + }
> + }
> +
> + rxq->skb = build_skb(xdp.data_hard_start, PAGE_SIZE);
> if (unlikely(!rxq->skb)) {
> netdev_err(dev,
> "Can't allocate skb on queue %d\n",
[...]
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer