From: Jesper Dangaard Brouer <brouer@redhat.com>
To: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Cc: netdev@vger.kernel.org,
Ilias Apalodimas <ilias.apalodimas@linaro.org>,
grygorii.strashko@ti.com, jakub.kicinski@netronome.com,
daniel@iogearbox.net, john.fastabend@gmail.com, ast@kernel.org,
linux-kernel@vger.kernel.org, linux-omap@vger.kernel.org,
brouer@redhat.com
Subject: Re: [PATCH] net: core: page_pool: add user refcnt and reintroduce page_pool_destroy
Date: Tue, 2 Jul 2019 23:02:41 +0200
Message-ID: <20190702230241.3be6d787@carbon>
In-Reply-To: <20190702185839.GH4510@khorivan>
On Tue, 2 Jul 2019 21:58:40 +0300
Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> wrote:
> On Tue, Jul 02, 2019 at 08:29:07PM +0200, Jesper Dangaard Brouer wrote:
> >On Tue, 2 Jul 2019 18:21:13 +0300
> >Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> wrote:
> >
> >> On Tue, Jul 02, 2019 at 05:10:29PM +0200, Jesper Dangaard Brouer wrote:
> >> >On Tue, 2 Jul 2019 17:56:13 +0300
> >> >Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> wrote:
> >> >
> >> >> On Tue, Jul 02, 2019 at 04:52:30PM +0200, Jesper Dangaard Brouer wrote:
> >> >> >On Tue, 2 Jul 2019 17:44:27 +0300
> >> >> >Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> wrote:
> >> >> >
> >> >> >> On Tue, Jul 02, 2019 at 04:31:39PM +0200, Jesper Dangaard Brouer wrote:
> >> >> >> >From: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
> >> >> >> >
> >> >> >> >Jesper recently removed page_pool_destroy() (from driver invocation) and
> >> >> >> >moved shutdown and free of page_pool into xdp_rxq_info_unreg(), in order
> >> >> >> >to handle in-flight packets/pages. This created an asymmetry in drivers'
> >> >> >> >create/destroy pairs.
> >> >> >> >
> >> >> >> >This patch adds a page_pool user refcnt and reintroduces page_pool_destroy.
> >> >> >> >This serves two purposes: (1) it simplifies drivers' error handling, as
> >> >> >> >drivers now always call page_pool_destroy() and don't need to track whether
> >> >> >> >xdp_rxq_info_reg_mem_model() was unsuccessful. (2) it allows special cases
> >> >> >> >where a single RX-queue (with a single page_pool) provides packets for two
> >> >> >> >net_devices, and thus needs to register the same page_pool twice with two
> >> >> >> >xdp_rxq_info structures.
> >> >> >>
> >> >> >> As I tend to use the xdp-level patch, there is no more reason to mention
> >> >> >> case (2) here. The XDP patch serves it better and can prevent not only
> >> >> >> object deletion but also pool flush, so this patch could be kept for
> >> >> >> case (1) only.
> >> >> >
> >> >> >I don't understand what you are saying.
> >> >> >
> >> >> >Do you approve this patch, or do you reject this patch?
> >> >> >
It's not a rejection; it's a proposal to use both the XDP and page_pool
patches, each serving its own goal.
> >> >
> >> >Just to be clear, if you want this patch to get accepted you have to
> >> >reply with your Signed-off-by (as I wrote).
> >> >
> >> >Maybe we should discuss it in another thread, about why you want two
> >> >solutions to the same problem.
> >>
> >> If it solves the same problem, I propose rejecting this one and using this:
> >> https://lkml.org/lkml/2019/7/2/651
> >
> >No, I propose using this one, and rejecting the other one.
>
> There are at least several arguments against this one (related to purpose (2)).
>
> It allows:
> - avoiding changes to page_pool/mlx5/netsec
> - saving not only the allocator object but also the allocator's "page/buffer flush"
> - buffer flush can be present not only in page_pool but also in other allocators,
>   which can behave differently and with a less simple solution.
> - not limiting cpsw (and potentially others) to the "page_pool" allocator only
> ....
>
> This patch is better kept as well, as it simplifies the error path for
> page_pool and has a less error-prone usage compared with the existing one.
>
> Please don't limit cpsw, and potentially other drivers, to using only
> page_pool; it can be zca or etc... I don't want to modify each allocator.
> I propose adding both, as in fact they solve different problems with a common
> solution.
I'm trying to limit the scope of your changes to your special case,
because I'm afraid this more general solution is going to limit our
options and paint us into a corner.
E.g. for correct lifetime handling, I think we actually need to do a
dev_hold() on the net_device. (The changes in f71fec47c2 might not be
enough, but I first need to dig into the details and ask Hellwig about
them.) Adding that after your patch is more complicated (if even
doable).
E.g. doing dev_hold() on the net_device can also turn into a
performance advantage, when/if page_pool is extended to also "travel"
into SKBs (allowing such dev_hold() calls to be elided in the netstack).
I also worry about the possible performance impact these changes will
have down the road. (For the RX/alloc side it should be clear by now
that we gain a lot of performance from the single RX-queue binding and
NAPI protection.) On the return/free side, performance *needs* to be
improved (it doesn't scale). I'm basically looking at different ways
to bulk-return pages into the ptr_ring, which requires changes in
page_pool and likely in the xdp_allocator structure. Your changes
complicate this.
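
To make the bulk-return idea concrete, here is a userspace sketch (not the
kernel code): pages are staged in a small per-caller array and flushed into
the ring with one producer-lock round-trip per batch, instead of one per page.
The toy ring, the BULK size, and all function names below are illustrative
assumptions, not the actual page_pool/ptr_ring API:

```c
#include <assert.h>
#include <stddef.h>

#define BULK 16

/* Toy single-producer ring standing in for the kernel's ptr_ring. */
struct ptr_ring {
	void *queue[256];
	size_t head;
};

static int lock_acquisitions; /* counts how often the "producer lock" is taken */

static void ring_produce_batch(struct ptr_ring *r, void **pages, size_t n)
{
	lock_acquisitions++;         /* one lock round-trip for the whole batch */
	for (size_t i = 0; i < n; i++)
		r->queue[r->head++] = pages[i];
}

/* Caller-side staging: collect returned pages, flush once BULK are gathered. */
struct page_bulk {
	void *pages[BULK];
	size_t count;
};

static void page_pool_return(struct ptr_ring *r, struct page_bulk *b, void *page)
{
	b->pages[b->count++] = page;
	if (b->count == BULK) {
		ring_produce_batch(r, b->pages, b->count);
		b->count = 0;
	}
}
```

With 32 pages returned this way, the "lock" is taken only twice instead of 32
times, which is the scaling argument being made above.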
This special use-case seems confined to your driver. And Ilias told me
that XDP is not really a performance benefit for this driver, as the HW
PPS-limit is hit before the XDP and netstack limits. I ask: does it
make sense to add XDP to this driver if it complicates the code for
everybody else?
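
For reference, the user-refcnt scheme from the quoted commit message can be
sketched in plain userspace C. The function names mirror the kernel API under
discussion (page_pool_create/page_pool_destroy, plus a hypothetical
page_pool_use taken by each extra registration), but the bodies are simplified
assumptions, not the actual implementation:

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified stand-in for struct page_pool; only the user refcnt matters here. */
struct page_pool {
	int user_cnt;   /* in the kernel this would be a refcount_t */
};

static struct page_pool *page_pool_create(void)
{
	struct page_pool *pool = calloc(1, sizeof(*pool));
	pool->user_cnt = 1;   /* the creator holds the first reference */
	return pool;
}

/* Taken once per extra user, e.g. by xdp_rxq_info_reg_mem_model(). */
static void page_pool_use(struct page_pool *pool)
{
	pool->user_cnt++;
}

/*
 * Drivers always call this, regardless of whether registration succeeded;
 * the pool is only freed when the last user drops its reference.
 */
static int page_pool_destroy(struct page_pool *pool)
{
	if (--pool->user_cnt > 0)
		return 0;     /* still in use elsewhere */
	free(pool);
	return 1;             /* last reference: actually released */
}
```

This is what makes the driver-side pairing symmetric again: every
page_pool_create() is matched by a page_pool_destroy(), and a pool shared by
two xdp_rxq_info registrations survives until both are gone.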
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer