From: Jesper Dangaard Brouer <jbrouer@redhat.com>
To: Yunsheng Lin <linyunsheng@huawei.com>,
Jakub Kicinski <kuba@kernel.org>,
Jesper Dangaard Brouer <jbrouer@redhat.com>
Cc: brouer@redhat.com, netdev@vger.kernel.org,
almasrymina@google.com, hawk@kernel.org,
ilias.apalodimas@linaro.org, edumazet@google.com,
dsahern@gmail.com, michael.chan@broadcom.com, willemb@google.com
Subject: Re: [RFC 00/12] net: huge page backed page_pool
Date: Wed, 12 Jul 2023 14:43:32 +0200 [thread overview]
Message-ID: <8b50a49e-5df8-dccd-154e-4423f0e8eda5@redhat.com> (raw)
In-Reply-To: <edf4f724-0c0e-c6ae-ffcb-ec1336448e59@huawei.com>
On 12/07/2023 13.47, Yunsheng Lin wrote:
> On 2023/7/12 8:08, Jakub Kicinski wrote:
>> On Tue, 11 Jul 2023 17:49:19 +0200 Jesper Dangaard Brouer wrote:
>>> I see you have discovered that the next bottleneck is IOTLB misses.
>>> One of the techniques for reducing IOTLB misses is using huge pages.
>>> These are called "super-pages" in the article (below), which reports
>>> that this trick doesn't work on AMD (Pacifica arch).
>>>
>>> I think you have convinced me that the pp_provider idea makes sense
>>> for *this* use-case, because it feels natural to extend PP with
>>> mitigations for IOTLB misses. (But I'm not 100% sure it fits Mina's
>>> use-case.)
>>
>> We're on the same page then (no pun intended).
>>
>>> What is your page refcnt strategy for these huge pages? I assume
>>> this relies on the PP frags scheme, e.g. using page->pp_frag_count.
>>> Is this correctly understood?
>>
>> Oh, I split the page into individual 4k pages after DMA mapping.
>> There's no need for the host memory to be a huge page. I mean,
>> the actual kernel identity mapping is a huge page AFAIU, and the
>> struct pages are allocated, anyway. We just need it to be a huge
>> page at DMA mapping time.
>>
>> So the pages from the huge page provider only differ from normal
>> alloc_page() pages by the fact that they are a part of a 1G DMA
>> mapping.
So, Jakub, you are saying the PP refcnts are still done "as usual" on
the individual 4k pages.
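
Something like the below is how I picture it (a rough, untested sketch
just to check my understanding -- dma_map_page() and
page_pool_set_dma_addr() are real APIs, but the surrounding variables
and scaffolding are made up, not taken from the RFC patches):

  /* Map the whole 1G compound page once. */
  dma_addr_t base;
  int i;

  base = dma_map_page(dev, huge_page, 0, SZ_1G, DMA_BIDIRECTIONAL);
  if (dma_mapping_error(dev, base))
          return -ENOMEM;

  /* Then hand out the 4k struct pages individually, each carrying
   * its slice of the 1G mapping.  Assumes the memmap for the 1G
   * region is contiguous, so huge_page + i is valid.  Refcounting
   * stays per 4k page, as with normal alloc_page() pages.
   */
  for (i = 0; i < SZ_1G / PAGE_SIZE; i++)
          page_pool_set_dma_addr(huge_page + i, base + i * PAGE_SIZE);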
>
> If it is about DMA mapping, is it possible to use dma_map_sg()
> to create one big contiguous DMA mapping for a lot of discontiguous
> 4k pages, and so avoid allocating a huge page at all?
>
> As the comment:
> "The scatter gather list elements are merged together (if possible)
> and tagged with the appropriate dma address and length."
>
> https://elixir.free-electrons.com/linux/v4.16.18/source/arch/arm/mm/dma-mapping.c#L1805
>
This is interesting for two reasons:
(1) whether this DMA merging helps with IOTLB misses (?)
(2) PP could use dma_map_sg() to amortize the cost of the dma_map calls.

For case (2), __page_pool_alloc_pages_slow() already bulk-allocates
pages (alloc_pages_bulk_array_node()) and then loops over the pages,
DMA mapping each one individually. It seems like an obvious win to use
dma_map_sg() here?
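
Roughly what I have in mind (an untested sketch; PP_ALLOC_CACHE_REFILL,
pool->p.dev and pool->p.dma_dir are the existing page_pool fields, the
rest of the error handling is omitted):

  struct scatterlist sgl[PP_ALLOC_CACHE_REFILL];
  struct scatterlist *sg;
  unsigned int off;
  int i, j = 0, nents;

  sg_init_table(sgl, nr_pages);
  for (i = 0; i < nr_pages; i++)
          sg_set_page(&sgl[i], pages[i], PAGE_SIZE, 0);

  /* One dma_map call for the whole bulk; adjacent entries may get
   * merged into fewer DMA segments.
   */
  nents = dma_map_sg(pool->p.dev, sgl, nr_pages, pool->p.dma_dir);
  if (!nents)
          goto err;

  /* Segment order follows page order, so walk each (possibly
   * merged) segment in PAGE_SIZE steps to recover the per-page
   * DMA addresses.
   */
  for_each_sg(sgl, sg, nents, i)
          for (off = 0; off < sg_dma_len(sg); off += PAGE_SIZE)
                  page_pool_set_dma_addr(pages[j++],
                                         sg_dma_address(sg) + off);

Whether the IOMMU driver actually merges the segments, and whether that
helps the IOTLB, is exactly question (1) above.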
--Jesper
Thread overview: 33+ messages
2023-07-07 18:39 [RFC 00/12] net: huge page backed page_pool Jakub Kicinski
2023-07-07 18:39 ` [RFC 01/12] net: hack together some page sharing Jakub Kicinski
2023-07-07 18:39 ` [RFC 02/12] net: create a 1G-huge-page-backed allocator Jakub Kicinski
2023-07-07 18:39 ` [RFC 03/12] net: page_pool: hide page_pool_release_page() Jakub Kicinski
2023-07-07 18:39 ` [RFC 04/12] net: page_pool: merge page_pool_release_page() with page_pool_return_page() Jakub Kicinski
2023-07-10 16:07 ` Jesper Dangaard Brouer
2023-07-07 18:39 ` [RFC 05/12] net: page_pool: factor out releasing DMA from releasing the page Jakub Kicinski
2023-07-07 18:39 ` [RFC 06/12] net: page_pool: create hooks for custom page providers Jakub Kicinski
2023-07-07 19:50 ` Mina Almasry
2023-07-07 22:28 ` Jakub Kicinski
2023-07-07 18:39 ` [RFC 07/12] net: page_pool: add huge page backed memory providers Jakub Kicinski
2023-07-07 18:39 ` [RFC 08/12] eth: bnxt: let the page pool manage the DMA mapping Jakub Kicinski
2023-07-10 10:12 ` Jesper Dangaard Brouer
2023-07-26 6:56 ` Ilias Apalodimas
2023-07-07 18:39 ` [RFC 09/12] eth: bnxt: use the page pool for data pages Jakub Kicinski
2023-07-10 4:22 ` Michael Chan
2023-07-10 17:04 ` Jakub Kicinski
2023-07-07 18:39 ` [RFC 10/12] eth: bnxt: make sure we make for recycle skbs before freeing them Jakub Kicinski
2023-07-07 18:39 ` [RFC 11/12] eth: bnxt: wrap coherent allocations into helpers Jakub Kicinski
2023-07-07 18:39 ` [RFC 12/12] eth: bnxt: hack in the use of MEP Jakub Kicinski
2023-07-07 19:45 ` [RFC 00/12] net: huge page backed page_pool Mina Almasry
2023-07-07 22:45 ` Jakub Kicinski
2023-07-10 17:31 ` Mina Almasry
2023-07-11 15:49 ` Jesper Dangaard Brouer
2023-07-12 0:08 ` Jakub Kicinski
2023-07-12 11:47 ` Yunsheng Lin
2023-07-12 12:43 ` Jesper Dangaard Brouer [this message]
2023-07-12 17:01 ` Jakub Kicinski
2023-07-14 13:05 ` Yunsheng Lin
2023-07-12 14:00 ` Jesper Dangaard Brouer
2023-07-12 17:19 ` Jakub Kicinski
2023-07-13 10:07 ` Jesper Dangaard Brouer
2023-07-13 16:27 ` Jakub Kicinski