From: Tariq Toukan <ttoukan.linux@gmail.com>
To: Jakub Kicinski <kuba@kernel.org>, davem@davemloft.net
Cc: netdev@vger.kernel.org, edumazet@google.com, pabeni@redhat.com,
hawk@kernel.org, ilias.apalodimas@linaro.org,
linyunsheng@huawei.com, Dragos Tatulea <dtatulea@nvidia.com>
Subject: Re: [PATCH net-next 0/3] page_pool: allow caching from safely localized NAPI
Date: Wed, 12 Apr 2023 09:41:16 +0300
Message-ID: <c433e59d-1262-656e-b08c-30adfc12edcc@gmail.com>
In-Reply-To: <20230411201800.596103-1-kuba@kernel.org>
Hi,
Happy holidays! I was traveling and couldn't review the RFCs.
Dragos and I had planned this task as a follow-up to our recent page pool
work in mlx5e. We're so glad to see this patchset! :)
On 11/04/2023 23:17, Jakub Kicinski wrote:
> I went back to the explicit "are we in NAPI" method, mostly
> because I don't like having both around :( (even tho I maintain
> that in_softirq() && !in_hardirq() is as safe, as softirqs do
> not nest).
>
> Still returning the skbs to a CPU, tho, not to the NAPI instance.
> I reckon we could create a small refcounted struct per NAPI instance
> which would allow sockets and other users to hold a persistent
> and safe reference. But that's a bigger change, and I get 90+%
> recycling thru the cache with just these patches (for RR and
> streaming tests with 100% CPU use it's almost 100%).
>
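If I read patches 1 and 2 correctly, the recycling decision boils down to
roughly the following (a simplified sketch from my reading, not the exact
hunks; the helper name is mine, napi_safe is the flag plumbed down from the
skb freeing paths in patch 1, and pool->p.napi / napi->list_owner are the
new fields the series adds):

        /* Use the lockless per-pool cache only when the skb is freed from
         * softirq context and the pool's NAPI instance is owned by the CPU
         * we are currently running on; otherwise fall back to the ptr_ring.
         */
        static bool napi_recycle_allowed(const struct page_pool *pool,
                                         bool napi_safe)
        {
                const struct napi_struct *napi;

                if (!napi_safe)
                        return false;

                napi = READ_ONCE(pool->p.napi);
                return napi &&
                       READ_ONCE(napi->list_owner) == smp_processor_id();
        }

Everything that fails this check takes the regular ptr_ring path, which is
what shows up in the ring counters below.
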
> Some numbers for streaming test with 100% CPU use (from previous version,
> but really they perform the same):
>
>                                 HW-GRO              page=page
>                          before      after     before      after
> recycle:
> cached:                       0  138669686          0  150197505
> cache_full:                   0     223391          0      74582
> ring:                 138551933    9997191  149299454          0
> ring_full:                    0        488       3154     127590
> released_refcnt:              0          0          0          0
>
Impressive.
Dragos tested your RFC v1.
He can test this version as well; we expect the same effect.
> alloc:
> fast:                 136491361  148615710  146969587  150322859
> slow:                      1772       1799        144        105
> slow_high_order:              0          0          0          0
> empty:                     1772       1799        144        105
> refill:                 2165245     156302    2332880       2128
> waive:                        0          0          0          0
>
General note:
For fragmented page-pool pages, the decision of whether to go through the
cache or the ring depends only on the last releasing thread and its context.
Our in-driver deferred-release trick is in many cases beneficial even in
the presence of this idea. The sets of cases improved by each idea
intersect, but are not identical.
That's why we decided to keep both solutions working together, rather than
only one of them.
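To illustrate with a hand-wavy sketch (not the actual mlx5e or page_pool
code; frag_release() is a made-up name, while page_pool_defrag_page() and
page_pool_put_defragged_page() are the existing frag API as I recall it):
for a page split into fragments, only the caller dropping the last fragment
reference actually returns the page, so that caller's context alone picks
cache vs ring:

        static void frag_release(struct page_pool *pool, struct page *page,
                                 bool napi_safe)
        {
                /* Drop one fragment reference; a non-zero return means other
                 * fragments are still outstanding, nothing is released yet.
                 */
                if (page_pool_defrag_page(page, 1))
                        return;

                /* Last reference: only now is the cache-vs-ring decision
                 * made, based on the context of this particular release
                 * (see the napi_recycle_allowed() sketch above).
                 */
                page_pool_put_defragged_page(pool, page, -1,
                                             napi_recycle_allowed(pool, napi_safe));
        }

Our deferred release shifts where that last release tends to happen, which
is why the sets of improved cases overlap without fully coinciding.
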
> v1:
> - rename the arg in_normal_napi -> napi_safe
> - also allow recycling in __kfree_skb_defer()
> rfcv2: https://lore.kernel.org/all/20230405232100.103392-1-kuba@kernel.org/
>
> Jakub Kicinski (3):
> net: skb: plumb napi state thru skb freeing paths
> page_pool: allow caching from safely localized NAPI
> bnxt: hook NAPIs to page pools
>
> Documentation/networking/page_pool.rst | 1 +
> drivers/net/ethernet/broadcom/bnxt/bnxt.c | 1 +
> include/linux/netdevice.h | 3 ++
> include/linux/skbuff.h | 20 +++++++----
> include/net/page_pool.h | 3 +-
> net/core/dev.c | 3 ++
> net/core/page_pool.c | 15 ++++++--
> net/core/skbuff.c | 42 ++++++++++++-----------
> 8 files changed, 58 insertions(+), 30 deletions(-)
>
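For mlx5e we would hook our NAPIs up in the same way. Judging by the
diffstat, patch 3 for bnxt is essentially a one-liner at pool creation
time; a generic sketch (placeholder variable names, not the exact bnxt
code) looks like:

        struct page_pool_params pp = {
                .order          = 0,
                .pool_size      = ring_size,
                .nid            = numa_node,
                .dev            = dma_dev,
                .napi           = &rx_napi,  /* new: tie the pool to its NAPI */
        };
        struct page_pool *pool = page_pool_create(&pp);

Everything else stays as before; pools that never set .napi keep the old
behaviour.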