From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: Huan Yang <link@vivo.com>
Cc: bingbu.cao@linux.intel.com,
	"Matthew Wilcox" <willy@infradead.org>,
	"Christoph Hellwig" <hch@lst.de>,
	"Gerd Hoffmann" <kraxel@redhat.com>,
	"Vivek Kasireddy" <vivek.kasireddy@intel.com>,
	"Sumit Semwal" <sumit.semwal@linaro.org>,
	"Christian König" <christian.koenig@amd.com>,
	"Andrew Morton" <akpm@linux-foundation.org>,
	"Uladzislau Rezki" <urezki@gmail.com>,
	"Shuah Khan" <shuah@kernel.org>,
	linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
	linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org,
	linux-mm@kvack.org, linux-kselftest@vger.kernel.org,
	opensource.kernel@vivo.com
Subject: Re: [RFC PATCH 0/6] Deep talk about folio vmap
Date: Fri, 28 Mar 2025 14:09:20 -0700	[thread overview]
Message-ID: <Z-cQAIsh3dAhaT6s@fedora> (raw)
In-Reply-To: <20250327092922.536-1-link@vivo.com>

On Thu, Mar 27, 2025 at 05:28:27PM +0800, Huan Yang wrote:
> Bingbu reported an issue in [1] where udmabuf vmap failed, and in [2] we
> discussed the folio vmap scenario that arises from the misuse of vmap_pfn
> in udmabuf.
> 
> We reached the conclusion that vmap_pfn prohibits the use of page-backed
> PFNs:
> Christoph Hellwig : 'No, vmap_pfn is entirely for memory not backed by
> pages or folios, i.e. PCIe BARs and similar memory.  This must not be
> mixed with proper folio backed memory.'
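
For context, the failing pattern looks roughly like this (a simplified
sketch, not the actual udmabuf code): a PFN array is built from
folio-backed memory and handed to vmap_pfn(), whose per-PFN callback
rejects anything that pfn_valid() recognizes as struct-page-backed.

/*
 * Simplified sketch of the misuse: vmap_pfn() is meant for PFNs without
 * struct pages (e.g. PCIe BARs), so folio-backed PFNs trip its
 * pfn_valid() rejection.
 */
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

static void *vmap_folio_backed(struct folio **folios, unsigned long nr)
{
	unsigned long *pfns;
	unsigned long i;
	void *vaddr;

	pfns = kvmalloc_array(nr, sizeof(*pfns), GFP_KERNEL);
	if (!pfns)
		return NULL;

	/* One PFN per folio here, just to keep the sketch short. */
	for (i = 0; i < nr; i++)
		pfns[i] = folio_pfn(folios[i]);

	/*
	 * These PFNs are backed by struct pages, so pfn_valid() is true
	 * for them and vmap_pfn() warns and fails -- the issue in [1].
	 */
	vaddr = vmap_pfn(pfns, nr, PAGE_KERNEL);
	kvfree(pfns);
	return vaddr;
}
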
> 
> But udmabuf still needs to handle vmap of HVO-based folios and fix the
> vmap issue. This RFC tries to show the two points that I mentioned in [2]
> and discuss them in more depth:
> 
> Point 1: simply copy the vmap_pfn code into udmabuf, so the common
> vmap_pfn is left untouched, and drop the pfn_valid check in the copy.
> 
> Point 2: implement a folio-array-based vmap (vmap_folios) that takes a
> range within each folio (offset, nr_pages), so it can suit vmap of HVO
> folios.
> 
> Patches 1-2 implement point 1 and add a simple test to the udmabuf
> driver. Patches 3-5 implement point 2, which can also be tested.
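
For illustration, one possible shape of such a range-based interface (a
hypothetical sketch only; the names and parameters here are not
necessarily what the RFC's vmap_range_folios ends up using):

/* Hypothetical sketch; the actual RFC interface may differ. */
#include <linux/mm.h>

struct folio_vmap_range {
	struct folio	*folio;
	unsigned long	offset;		/* starting page offset within the folio */
	unsigned long	nr_pages;	/* number of pages to map from it */
};

void *vmap_folio_ranges(const struct folio_vmap_range *ranges,
			unsigned int count, pgprot_t prot);

Keeping offset/nr_pages per entry is what would let an HVO folio be
mapped partially without walking its tail struct pages.
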
> 
> Kasireddy also pointed out that 'another option is to just limit udmabuf's
> vmap() to only shmem folios' (I guess folio_test_hugetlb_vmemmap_optimized
> can help with that).
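
For reference, that alternative could look roughly like the check below
(an illustrative sketch, not the actual udmabuf code, assuming the
hugetlb vmemmap-optimization helpers are available), rejecting vmap for
hugetlb folios that HVO has optimized:

#include <linux/hugetlb.h>
#include <linux/page-flags.h>

/* Illustrative only: skip page-based vmap for HVO-optimized folios. */
static bool udmabuf_folio_page_vmappable(struct folio *folio)
{
	if (folio_test_hugetlb(folio) &&
	    folio_test_hugetlb_vmemmap_optimized(folio))
		return false;	/* tail struct pages unusable under HVO */
	return true;
}
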
> 
> But I prefer point 2 as the solution to this issue, and IMO a folio-based
> vmap is still needed.
> 
> Compared to a page-based (or pfn-based) vmap, we would have to split each
> large folio into individual page structs, which needs a larger array and a
> longer iteration. If the tail page structs do not exist (as with HVO), only
> a pfn-based vmap is possible, but there is no common API for that.
> 
> In [2], we discussed that udmabuf can use hugetlb as the memory provider
> and can expose a range of it. If HVO is applied to the hugetlb folios,
> each folio's tail pages may be freed, so we cannot use a page-based vmap
> and can only use a pfn-based one, as shown in point 1.
> 
> Furthermore, a folio-based vmap only needs to record each folio (plus
> offset and nr_pages if a range is needed). For a 20MB vmap, a page-based
> approach needs 5120 page pointers (40KB), while 2MB folios need only
> 10 folio pointers (80 bytes).
> 
> Matthew pointed out that Vishal also offered a folio-based vmap,
> vmap_file [3]. This RFC wants a range-based folio vmap, not only a map of
> full folios (like a file's folios), to solve problems such as vmap of an
> HVO folio range.

Hmmm, I should've been more communicative, sorry about that. V1 was
poorly implemented, and I've had a V2 sitting around that does exactly
what you want.

I'll send V2 to the mailing list and you can take a look at it;
preferably you would integrate that into this patchset instead (it would
make both the udmabuf and vmalloc code much neater).

> Please give me more suggestions.
> 
> Test case:
> //enable/disable HVO
> 1. echo [1|0] > /proc/sys/vm/hugetlb_optimize_vmemmap
> //prepare HUGETLB
> 2. echo 10 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
> 3. ./udmabuf_vmap
> 4. Check the output, and check dmesg for any warnings.
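
For step 3, the userspace side of such a test could look roughly like
this (a sketch under the assumption that the buffer comes from a sealed
2MB hugetlb memfd; the selftest in this series may differ, and the vmap
itself is exercised by the in-kernel test code from patches 2 and 5):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/memfd.h>
#include <linux/udmabuf.h>

#define BUF_SIZE	(10UL * 2 * 1024 * 1024)	/* ten 2MB hugetlb folios */

int main(void)
{
	struct udmabuf_create create;
	int memfd, devfd, dmabuf_fd;

	memfd = syscall(__NR_memfd_create, "udmabuf-hugetlb",
			MFD_HUGETLB | MFD_HUGE_2MB | MFD_ALLOW_SEALING);
	if (memfd < 0 || ftruncate(memfd, BUF_SIZE) < 0)
		return 1;

	/* udmabuf requires the memfd to be sealed against shrinking. */
	if (fcntl(memfd, F_ADD_SEALS, F_SEAL_SHRINK) < 0)
		return 1;

	devfd = open("/dev/udmabuf", O_RDWR);
	if (devfd < 0)
		return 1;

	memset(&create, 0, sizeof(create));
	create.memfd  = memfd;
	create.flags  = UDMABUF_FLAGS_CLOEXEC;
	create.offset = 0;
	create.size   = BUF_SIZE;

	/* On success the ioctl returns a new dma-buf fd backed by hugetlb. */
	dmabuf_fd = ioctl(devfd, UDMABUF_CREATE, &create);
	if (dmabuf_fd < 0)
		return 1;

	printf("udmabuf created from hugetlb memfd, fd=%d\n", dmabuf_fd);
	return 0;
}
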
> 
> [1] https://lore.kernel.org/all/9172a601-c360-0d5b-ba1b-33deba430455@linux.intel.com/
> [2] https://lore.kernel.org/lkml/20250312061513.1126496-1-link@vivo.com/
> [3] https://lore.kernel.org/linux-mm/20250131001806.92349-1-vishal.moola@gmail.com/
> 
> Huan Yang (6):
>   udmabuf: try fix udmabuf vmap
>   udmabuf: try udmabuf vmap test
>   mm/vmalloc: try add vmap folios range
>   udmabuf: use vmap_range_folios
>   udmabuf: vmap test suit for pages and pfns compare
>   udmabuf: remove no need code
> 
>  drivers/dma-buf/udmabuf.c | 29 +++++++++-----------
>  include/linux/vmalloc.h   | 57 +++++++++++++++++++++++++++++++++++++++
>  mm/vmalloc.c              | 47 ++++++++++++++++++++++++++++++++
>  3 files changed, 117 insertions(+), 16 deletions(-)
> 
> --
> 2.48.1
> 



Thread overview: 21+ messages
2025-03-27  9:28 [RFC PATCH 0/6] Deep talk about folio vmap Huan Yang
2025-03-27  9:28 ` [RFC PATCH 1/6] udmabuf: try fix udmabuf vmap Huan Yang
2025-03-27  9:28 ` [RFC PATCH 2/6] udmabuf: try udmabuf vmap test Huan Yang
2025-03-27  9:28 ` [RFC PATCH 3/6] mm/vmalloc: try add vmap folios range Huan Yang
2025-03-27  9:28 ` [RFC PATCH 4/6] udmabuf: use vmap_range_folios Huan Yang
2025-03-27  9:28 ` [RFC PATCH 5/6] udmabuf: vmap test suit for pages and pfns compare Huan Yang
2025-03-27  9:28 ` [RFC PATCH 6/6] udmabuf: remove no need code Huan Yang
2025-03-28 21:09 ` Vishal Moola (Oracle) [this message]
2025-04-04  9:01 ` CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP is broken, was Re: [RFC PATCH 0/6] Deep talk about folio vmap Christoph Hellwig
2025-04-04  9:38   ` Muchun Song
2025-04-04 10:07     ` Muchun Song
2025-04-07  1:59       ` Huan Yang
2025-04-07  2:57         ` Muchun Song
2025-04-07  3:21           ` Huan Yang
2025-04-07  3:37             ` Muchun Song
2025-04-07  6:43               ` Muchun Song
2025-04-07  7:09                 ` Huan Yang
2025-04-07  7:22                   ` Muchun Song
2025-04-07  8:55                     ` Huan Yang
2025-04-07  8:59                 ` Christoph Hellwig
2025-04-07  9:48                   ` Muchun Song
