From: Ilias Apalodimas <ilias.apalodimas@linaro.org>
To: Jesper Dangaard Brouer <brouer@redhat.com>,
	grygorii.strashko@ti.com, hawk@kernel.org, davem@davemloft.net,
	ast@kernel.org, linux-kernel@vger.kernel.org,
	linux-omap@vger.kernel.org, xdp-newbies@vger.kernel.org,
	netdev@vger.kernel.org, daniel@iogearbox.net,
	jakub.kicinski@netronome.com, john.fastabend@gmail.com
Subject: Re: [PATCH v6 net-next 5/5] net: ethernet: ti: cpsw: add XDP support
Date: Thu, 4 Jul 2019 12:49:38 +0300	[thread overview]
Message-ID: <20190704094938.GA27382@apalos> (raw)
In-Reply-To: <20190704094329.GA19839@khorivan>

On Thu, Jul 04, 2019 at 12:43:30PM +0300, Ivan Khoronzhuk wrote:
> On Thu, Jul 04, 2019 at 12:39:02PM +0300, Ilias Apalodimas wrote:
> >On Thu, Jul 04, 2019 at 11:19:39AM +0200, Jesper Dangaard Brouer wrote:
> >>On Wed,  3 Jul 2019 13:19:03 +0300
> >>Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> wrote:
> >>
> >>> Add XDP support based on the rx page_pool allocator, one frame per page.
> >>> The page pool allocator is used with the assumption that only one
> >>> rx_handler runs at a time. DMA map/unmap is reused from the page pool
> >>> even though there is no need to map the whole page.
> >>>
> >>> Due to the specifics of cpsw, the same TX/RX handler can be used by 2
> >>> network devices, so special fields are added to the buffer to identify
> >>> the interface a frame is destined to. Thus XDP works for both
> >>> interfaces, which allows testing xdp redirect between the two
> >>> interfaces easily. Also, each rx queue has its own page pool, which is
> >>> common to both netdevs.
> >>>
> >>> The XDP prog is common for all channels until appropriate changes are
> >>> added to the XDP infrastructure. Also, once page_pool recycling becomes
> >>> part of the skb netstack, some simplifications can be added, like
> >>> removing page_pool_release_page() before skb receive.
> >>>
> >>> In order to keep rx_dev stable during a redirect, which may be useful
> >>> in the future, flush in the rx_handler; this keeps the rx dev the same
> >>> while redirecting and is consistent with the rx_dev tracing pointed
> >>> out by Jesper.
> >>
> >>So, you simply call xdp_do_flush_map() after each xdp_do_redirect().
> >>It will kill RX-bulk and performance, but I guess it will work.
> >>
> >>I guess, we can optimized it later, by e.g. in function calling
> >>cpsw_run_xdp() have a variable that detect if net_device changed
> >>(priv->ndev) and then call xdp_do_flush_map() when needed.
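
Something along these lines could work for that later optimization. This is
a completely untested sketch: cpsw_run_xdp() and priv->ndev are names from
the patch (the real signature will differ), while the "last_ndev" tracking
and the helper name are made up here purely for illustration:

	/* Hypothetical per-descriptor step: batch redirects and only flush
	 * the redirect maps when the incoming net_device changes; the NAPI
	 * poll still does one final xdp_do_flush_map() after its loop.
	 */
	static void cpsw_rx_xdp_one(struct cpsw_priv *priv,
				    struct net_device **last_ndev,
				    struct xdp_buff *xdp)
	{
		if (*last_ndev && *last_ndev != priv->ndev)
			xdp_do_flush_map();	/* destination ndev changed */

		cpsw_run_xdp(priv, xdp);	/* may call xdp_do_redirect() */
		*last_ndev = priv->ndev;
	}
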
> >I tried something similar on the netsec driver on my initial development.
> >On the 1gbit speed NICs i saw no difference between flushing per packet vs
> >flushing on the end of the NAPI handler.
> >The latter is obviously better but since the performance impact is negligible on
> >this particular NIC, i don't think this should be a blocker.
> >Please add a clear comment on this and why you do that on this driver,
> >so people won't go ahead and copy/paste this approach
> Sorry, but I did this already, is it not enough?
The flush *must* happen there to avoid messing up the following layers. The
comment says something like 'just to be sure'. It's not something that might
break, it's something that *will* break the code, and I don't think that's
clear from the current comment.

So I'd prefer something like:
'We must flush here, per packet, instead of doing it in bulk at the end of
the napi handler. The RX devices on this particular hardware are sharing a
common queue, so the incoming device might change per packet.'
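
i.e. roughly like this, just to show where the comment and the flush go (the
exact surroundings depend on how the final cpsw_run_xdp() ends up looking,
this is not the patch's code):

	case XDP_REDIRECT:
		if (xdp_do_redirect(ndev, xdp, prog))
			goto drop;
		/* We must flush here, per packet, instead of doing it in
		 * bulk at the end of the napi handler: the RX queues on
		 * this hardware are shared by two net_devices, so the
		 * incoming device might change from one packet to the next.
		 */
		xdp_do_flush_map();
		break;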


Thanks
/Ilias
> 
> -- 
> Regards,
> Ivan Khoronzhuk

Thread overview: 17+ messages
2019-07-03 10:18 [PATCH v6 net-next 0/5] net: ethernet: ti: cpsw: Add XDP support Ivan Khoronzhuk
2019-07-03 10:18 ` [PATCH v6 net-next 1/5] xdp: allow same allocator usage Ivan Khoronzhuk
2019-07-03 17:40   ` Jesper Dangaard Brouer
2019-07-04 10:22     ` Ivan Khoronzhuk
2019-07-04 12:41       ` Jesper Dangaard Brouer
2019-07-04 17:11         ` Ivan Khoronzhuk
2019-07-03 10:19 ` [PATCH v6 net-next 2/5] net: ethernet: ti: davinci_cpdma: add dma mapped submit Ivan Khoronzhuk
2019-07-05 19:32   ` kbuild test robot
2019-07-03 10:19 ` [PATCH v6 net-next 3/5] net: ethernet: ti: davinci_cpdma: allow desc split while down Ivan Khoronzhuk
2019-07-03 10:19 ` [PATCH v6 net-next 4/5] net: ethernet: ti: cpsw_ethtool: allow res " Ivan Khoronzhuk
2019-07-03 10:19 ` [PATCH v6 net-next 5/5] net: ethernet: ti: cpsw: add XDP support Ivan Khoronzhuk
2019-07-04  9:19   ` Jesper Dangaard Brouer
2019-07-04  9:39     ` Ilias Apalodimas
2019-07-04  9:43       ` Ivan Khoronzhuk
2019-07-04  9:49         ` Ilias Apalodimas [this message]
2019-07-04  9:53           ` Ivan Khoronzhuk
2019-07-04  9:45     ` Ivan Khoronzhuk
