From: Jesper Dangaard Brouer <brouer@redhat.com>
To: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Cc: grygorii.strashko@ti.com, hawk@kernel.org, davem@davemloft.net,
	ast@kernel.org, linux-kernel@vger.kernel.org,
	linux-omap@vger.kernel.org, xdp-newbies@vger.kernel.org,
	ilias.apalodimas@linaro.org, netdev@vger.kernel.org,
	daniel@iogearbox.net, jakub.kicinski@netronome.com,
	john.fastabend@gmail.com, brouer@redhat.com
Subject: Re: [PATCH v5 net-next 6/6] net: ethernet: ti: cpsw: add XDP support
Date: Wed, 3 Jul 2019 09:26:03 +0200
Message-ID: <20190703092603.66f36914@carbon>
In-Reply-To: <20190630172348.5692-7-ivan.khoronzhuk@linaro.org>


On Sun, 30 Jun 2019 20:23:48 +0300 Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> wrote:

> Add XDP support based on the rx page_pool allocator, one frame per
> page.  The page pool allocator is used with the assumption that only
> one rx_handler is running simultaneously.  DMA map/unmap is reused
> from the page pool even though there is no need to map the whole page.
> 
> Due to the specifics of cpsw, the same TX/RX handler can be used by 2
> network devices, so special fields are added to the buffer to identify
> the interface a frame is destined for.  Thus XDP works for both
> interfaces, which makes it easy to test xdp redirect between the two
> interfaces.  Also, each rx queue has its own page pool, which is
> common to both netdevs.
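
For context, the per-channel pool setup described above would look
roughly like this with the page_pool API as of v5.2 (a sketch only;
the function name is illustrative, not the actual driver code):

	#include <net/page_pool.h>

	static struct page_pool *cpsw_create_rx_pool(struct device *dma_dev,
						     int pool_size)
	{
		struct page_pool_params pp_params = {
			.flags     = PP_FLAG_DMA_MAP,	/* pool handles DMA map/unmap */
			.order     = 0,			/* one frame per page */
			.pool_size = pool_size,
			.nid       = NUMA_NO_NODE,
			.dma_dir   = DMA_BIDIRECTIONAL,
			.dev       = dma_dev,
		};

		/* One pool per RX channel, shared by both cpsw netdevs */
		return page_pool_create(&pp_params);
	}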

Looking at the details of what happens when a single RX-queue can
receive into multiple net_devices, I realize that this driver will
violate/kill some of the "hidden"/implicit RX-bulking that the
XDP_REDIRECT code depends on for performance.

Specifically, it violates this assumption:
 https://github.com/torvalds/linux/blob/v5.2-rc7/kernel/bpf/devmap.c#L324-L329

	/* Ingress dev_rx will be the same for all xdp_frame's in
	 * bulk_queue, because bq stored per-CPU and must be flushed
	 * from net_device drivers NAPI func end.
	 */
	if (!bq->dev_rx)
		bq->dev_rx = dev_rx;

This driver's "NAPI func end" can have received into multiple
net_devices before its NAPI cycle ends, thus violating the
assumption in the code above.
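
To make the failure mode concrete, here is the enqueue path around the
quoted comment, lightly condensed from kernel/bpf/devmap.c in v5.2,
with the cpsw-relevant consequence noted:

	static int bq_enqueue(struct bpf_dtab_netdev *obj,
			      struct xdp_frame *xdpf,
			      struct net_device *dev_rx)
	{
		struct xdp_bulk_queue *bq = this_cpu_ptr(obj->bulkq);

		if (unlikely(bq->count == DEV_MAP_BULK_SIZE))
			bq_xmit_all(obj, bq, 0, true);

		/* Only the FIRST frame latches dev_rx.  If the same
		 * NAPI poll then redirects a frame that ingressed on
		 * the other cpsw netdev, that frame is silently
		 * attributed to the first one.
		 */
		if (!bq->dev_rx)
			bq->dev_rx = dev_rx;

		bq->q[bq->count++] = xdpf;
		return 0;
	}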

Knowing that all xdp_frame's in the bulk queue come from the same
net_device can be used to further optimize XDP.  E.g. the
dev->netdev_ops->ndo_xdp_xmit() call doesn't take full advantage of
this yet.  If we merge this driver, it will block optimizations in
this area.
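
As a purely hypothetical sketch of the kind of optimization meant here
(not existing kernel code), a flush could resolve ingress-side state
once per bulk instead of once per frame, because dev_rx is guaranteed
identical for all queued frames:

	static int bq_xmit_all_opt(struct bpf_dtab_netdev *obj,
				   struct xdp_bulk_queue *bq, u32 flags)
	{
		struct net_device *dev = obj->dev;
		int sent;

		/* bq->dev_rx is the same for all bq->count frames, so
		 * e.g. a recycle/return target for the ingress device
		 * could be looked up here once, outside any per-frame
		 * loop, before handing the whole bulk to the driver.
		 */
		sent = dev->netdev_ops->ndo_xdp_xmit(dev, bq->count,
						     bq->q, flags);
		if (sent < 0)
			sent = 0;

		bq->count = 0;
		bq->dev_rx = NULL;
		return sent;
	}

A driver that can receive into multiple net_devices within one NAPI
cycle makes such a guarantee impossible to rely on.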

NACK

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer


Thread overview: 30+ messages
2019-06-30 17:23 [PATCH v5 net-next 0/6] net: ethernet: ti: cpsw: Add XDP support Ivan Khoronzhuk
2019-06-30 17:23 ` [PATCH v5 net-next 1/6] xdp: allow same allocator usage Ivan Khoronzhuk
2019-07-01 11:40   ` Jesper Dangaard Brouer
2019-07-02 10:27     ` Ivan Khoronzhuk
2019-07-02 14:46       ` Jesper Dangaard Brouer
2019-07-02 14:53         ` Ivan Khoronzhuk
2019-06-30 17:23 ` [PATCH v5 net-next 2/6] net: ethernet: ti: davinci_cpdma: add dma mapped submit Ivan Khoronzhuk
2019-06-30 17:23 ` [PATCH v5 net-next 3/6] net: ethernet: ti: davinci_cpdma: return handler status Ivan Khoronzhuk
2019-06-30 17:23 ` [PATCH v5 net-next 4/6] net: ethernet: ti: davinci_cpdma: allow desc split while down Ivan Khoronzhuk
2019-06-30 17:23 ` [PATCH v5 net-next 5/6] net: ethernet: ti: cpsw_ethtool: allow res " Ivan Khoronzhuk
2019-06-30 17:23 ` [PATCH v5 net-next 6/6] net: ethernet: ti: cpsw: add XDP support Ivan Khoronzhuk
2019-07-01 16:19   ` Jesper Dangaard Brouer
2019-07-01 18:09     ` Ilias Apalodimas
2019-07-02 11:37     ` Ivan Khoronzhuk
2019-07-02 13:39       ` Jesper Dangaard Brouer
2019-07-02 14:24         ` Ivan Khoronzhuk
2019-07-02 14:31         ` [PATCH] net: core: page_pool: add user refcnt and reintroduce page_pool_destroy Jesper Dangaard Brouer
2019-07-02 14:44           ` Ivan Khoronzhuk
2019-07-02 14:52             ` Jesper Dangaard Brouer
2019-07-02 14:56               ` Ivan Khoronzhuk
2019-07-02 15:10                 ` Jesper Dangaard Brouer
2019-07-02 15:21                   ` Ivan Khoronzhuk
2019-07-02 18:29                     ` Jesper Dangaard Brouer
2019-07-02 18:58                       ` Ivan Khoronzhuk
2019-07-02 20:28                         ` Ivan Khoronzhuk
2019-07-02 21:02                         ` Jesper Dangaard Brouer
2019-07-02 21:15                           ` Ilias Apalodimas
2019-07-02 21:41                             ` Ivan Khoronzhuk
2019-07-03  7:26   ` Jesper Dangaard Brouer [this message]
2019-07-03  7:38     ` [PATCH v5 net-next 6/6] net: ethernet: ti: cpsw: add XDP support Ivan Khoronzhuk
