From: Jakub Kicinski <kuba@kernel.org>
To: Jesper Dangaard Brouer <jbrouer@redhat.com>
Cc: Yunsheng Lin <linyunsheng@huawei.com>,
brouer@redhat.com, netdev@vger.kernel.org,
almasrymina@google.com, hawk@kernel.org,
ilias.apalodimas@linaro.org, edumazet@google.com,
dsahern@gmail.com, michael.chan@broadcom.com, willemb@google.com
Subject: Re: [RFC 00/12] net: huge page backed page_pool
Date: Wed, 12 Jul 2023 10:01:08 -0700 [thread overview]
Message-ID: <20230712100108.00bee44f@kernel.org> (raw)
In-Reply-To: <8b50a49e-5df8-dccd-154e-4423f0e8eda5@redhat.com>
On Wed, 12 Jul 2023 14:43:32 +0200 Jesper Dangaard Brouer wrote:
> On 12/07/2023 13.47, Yunsheng Lin wrote:
> > On 2023/7/12 8:08, Jakub Kicinski wrote:
> >> Oh, I split the page into individual 4k pages after DMA mapping.
> >> There's no need for the host memory to be a huge page. I mean,
> >> the actual kernel identity mapping is a huge page AFAIU, and the
> >> struct pages are allocated, anyway. We just need it to be a huge
> >> page at DMA mapping time.
> >>
> >> So the pages from the huge page provider only differ from normal
> >> alloc_page() pages by the fact that they are a part of a 1G DMA
> >> mapping.
>
> So, Jakub, you are saying the PP refcnts are still done "as usual" on
> individual pages.
Yes - other than coming from a specific 1G of physical memory,
the resulting pages are really pretty ordinary 4k pages.
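
To make that concrete, here is a minimal sketch of the idea (this is not
the RFC code; hp_chunk, hp_base and hp_dma are illustrative names): the 1G
region is mapped once, and each constituent 4k page gets its DMA address
as an offset into that single mapping.

#include <linux/dma-mapping.h>
#include <linux/sizes.h>

struct hp_chunk {
	struct page *hp_base;	/* first struct page of the 1G region */
	dma_addr_t hp_dma;	/* DMA address of the whole region */
};

/* One dma_map_page() call covers the entire 1G region. */
static int hp_chunk_map(struct device *dev, struct hp_chunk *c)
{
	c->hp_dma = dma_map_page(dev, c->hp_base, 0, SZ_1G,
				 DMA_BIDIRECTIONAL);
	return dma_mapping_error(dev, c->hp_dma) ? -ENOMEM : 0;
}

/* The i-th 4k page is an ordinary page; its DMA address is an offset. */
static struct page *hp_chunk_page(struct hp_chunk *c, unsigned int i,
				  dma_addr_t *dma)
{
	*dma = c->hp_dma + (dma_addr_t)i * PAGE_SIZE;
	return c->hp_base + i;
}
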
> > If it is about DMA mapping, is it possible to use dma_map_sg()
> > to get one big contiguous DMA mapping for a lot of discontinuous
> > 4k pages, and so avoid allocating a huge page?
> >
> > As the comment says:
> > "The scatter gather list elements are merged together (if possible)
> > and tagged with the appropriate dma address and length."
> >
> > https://elixir.free-electrons.com/linux/v4.16.18/source/arch/arm/mm/dma-mapping.c#L1805
> >
>
> This is interesting for two reasons.
>
> (1) whether this DMA merging helps with IOTLB misses (?)
Maybe I misunderstand how IOMMU / virtual addressing works, but I don't
see how one can merge mappings from physically non-contiguous pages.
IOW we can't get 1G-worth of random 4k pages and hope that thru some
magic they get strung together and share an IOTLB entry (if that's
where Yunsheng's suggestion was going..)
> (2) PP could use dma_map_sg() to amortize dma_map call cost.
>
> For case (2) __page_pool_alloc_pages_slow() already does bulk allocation
> of pages (alloc_pages_bulk_array_node()), and then loops over the pages
> to DMA map them individually. It seems like an obvious win to use
> dma_map_sg() here?
That could well be worth investigating!
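
To make the suggestion concrete, a rough sketch of what that could look
like (not actual page_pool code; pp_bulk_map_sg is a hypothetical helper):
build a scatterlist over the bulk-allocated order-0 pages and map them
with a single dma_map_sg() call. Per-page DMA addresses would then be
recovered by walking the returned entries with for_each_sg() and
sg_dma_address(), which is the part that needs care once neighbouring
entries get merged.

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/*
 * Map n order-0 pages with one dma_map_sg() call.  The caller keeps sgl
 * around for the matching dma_unmap_sg().  Returns the number of mapped
 * entries (possibly fewer than n if entries were merged), or 0 on error.
 */
static int pp_bulk_map_sg(struct device *dev, struct scatterlist *sgl,
			  struct page **pages, unsigned int n)
{
	unsigned int i;

	sg_init_table(sgl, n);
	for (i = 0; i < n; i++)
		sg_set_page(&sgl[i], pages[i], PAGE_SIZE, 0);

	/* One DMA API call instead of n dma_map_page() calls. */
	return dma_map_sg(dev, sgl, n, DMA_FROM_DEVICE);
}
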
Thread overview: 33+ messages
2023-07-07 18:39 [RFC 00/12] net: huge page backed page_pool Jakub Kicinski
2023-07-07 18:39 ` [RFC 01/12] net: hack together some page sharing Jakub Kicinski
2023-07-07 18:39 ` [RFC 02/12] net: create a 1G-huge-page-backed allocator Jakub Kicinski
2023-07-07 18:39 ` [RFC 03/12] net: page_pool: hide page_pool_release_page() Jakub Kicinski
2023-07-07 18:39 ` [RFC 04/12] net: page_pool: merge page_pool_release_page() with page_pool_return_page() Jakub Kicinski
2023-07-10 16:07 ` Jesper Dangaard Brouer
2023-07-07 18:39 ` [RFC 05/12] net: page_pool: factor out releasing DMA from releasing the page Jakub Kicinski
2023-07-07 18:39 ` [RFC 06/12] net: page_pool: create hooks for custom page providers Jakub Kicinski
2023-07-07 19:50 ` Mina Almasry
2023-07-07 22:28 ` Jakub Kicinski
2023-07-07 18:39 ` [RFC 07/12] net: page_pool: add huge page backed memory providers Jakub Kicinski
2023-07-07 18:39 ` [RFC 08/12] eth: bnxt: let the page pool manage the DMA mapping Jakub Kicinski
2023-07-10 10:12 ` Jesper Dangaard Brouer
2023-07-26 6:56 ` Ilias Apalodimas
2023-07-07 18:39 ` [RFC 09/12] eth: bnxt: use the page pool for data pages Jakub Kicinski
2023-07-10 4:22 ` Michael Chan
2023-07-10 17:04 ` Jakub Kicinski
2023-07-07 18:39 ` [RFC 10/12] eth: bnxt: make sure we make for recycle skbs before freeing them Jakub Kicinski
2023-07-07 18:39 ` [RFC 11/12] eth: bnxt: wrap coherent allocations into helpers Jakub Kicinski
2023-07-07 18:39 ` [RFC 12/12] eth: bnxt: hack in the use of MEP Jakub Kicinski
2023-07-07 19:45 ` [RFC 00/12] net: huge page backed page_pool Mina Almasry
2023-07-07 22:45 ` Jakub Kicinski
2023-07-10 17:31 ` Mina Almasry
2023-07-11 15:49 ` Jesper Dangaard Brouer
2023-07-12 0:08 ` Jakub Kicinski
2023-07-12 11:47 ` Yunsheng Lin
2023-07-12 12:43 ` Jesper Dangaard Brouer
2023-07-12 17:01 ` Jakub Kicinski [this message]
2023-07-14 13:05 ` Yunsheng Lin
2023-07-12 14:00 ` Jesper Dangaard Brouer
2023-07-12 17:19 ` Jakub Kicinski
2023-07-13 10:07 ` Jesper Dangaard Brouer
2023-07-13 16:27 ` Jakub Kicinski