* [RFC PATCH 0/6] Deep talk about folio vmap
@ 2025-03-27  9:28 Huan Yang
From: Huan Yang @ 2025-03-27  9:28 UTC (permalink / raw)
  To: bingbu.cao, Matthew Wilcox, Christoph Hellwig, Gerd Hoffmann,
	Vivek Kasireddy, Sumit Semwal, Christian König,
	Andrew Morton, Uladzislau Rezki, Shuah Khan, Huan Yang,
	linux-kernel, dri-devel, linux-media, linaro-mm-sig, linux-mm,
	linux-kselftest
  Cc: opensource.kernel

Bingbu reported an issue in [1] where udmabuf vmap failed, and in [2] we
discussed the folio vmap scenario that arises from the misuse of vmap_pfn
in udmabuf.

We reached the conclusion that vmap_pfn must not be used for page-backed
PFNs:
Christoph Hellwig: 'No, vmap_pfn is entirely for memory not backed by
pages or folios, i.e. PCIe BARs and similar memory.  This must not be
mixed with proper folio backed memory.'

But udmabuf still needs to handle vmap of HVO-backed folios, so the vmap
issue still needs a fix. This RFC demonstrates the two points I mentioned
in [2] and discusses them in more depth:

Point 1: copy the vmap_pfn code into udmabuf rather than touching the
common vmap_pfn, and drop the pfn_valid check in the private copy.

Point 2: implement a folio-array based vmap (vmap_folios) that takes a
range (offset, nr_pages) within each folio, so it can handle vmap of HVO
folios.

Patches 1-2 implement point 1 and add a simple test in the udmabuf
driver. Patches 3-5 implement point 2, which can be tested the same way.
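
For context, here is a rough sketch of what the range-aware folio vmap of
point 2 could look like. The names and parameters below are only my
illustrative guesses for discussion; the real interface is in patch 3.

/*
 * Sketch only: map @nr_folios folios into contiguous kernel virtual
 * address space. For folio i, only the sub-range starting at page
 * offset @offsets[i] and spanning @nr_pages[i] pages is mapped, so a
 * caller can map part of a large (e.g. 2MB hugetlb) folio.
 */
void *vmap_range_folios(struct folio **folios, unsigned int nr_folios,
			pgoff_t *offsets, unsigned long *nr_pages,
			pgprot_t prot);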

Kasireddy also suggested that 'another option is to just limit udmabuf's
vmap() to only shmem folios' (I guess folio_test_hugetlb_vmemmap_optimized
could help with that).
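
As a sketch of that alternative (not part of this series), udmabuf's vmap
could simply refuse buffers backed by vmemmap-optimized folios; the
ubuf->folios/ubuf->pagecount fields below only stand in for the driver's
real bookkeeping:

static bool udmabuf_can_page_vmap(struct udmabuf *ubuf)
{
	pgoff_t pg;

	for (pg = 0; pg < ubuf->pagecount; pg++) {
		/* HVO freed the tail struct pages; page-based vmap is unsafe */
		if (folio_test_hugetlb_vmemmap_optimized(ubuf->folios[pg]))
			return false;
	}
	return true;
}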

But I prefer point 2 as the solution to this issue, and IMO a folio-based
vmap is still needed.

Compared to a page-based (or pfn-based) vmap, we would have to split each
large folio into individual struct pages, which needs a larger array and a
longer iteration. And if the tail struct pages do not exist (as with HVO),
only a pfn-based vmap is possible, but there is no common API for that.

In [2] we discussed that udmabuf can use hugetlb as the memory provider
and map a sub-range of it. If HVO is enabled for hugetlb, each folio's
tail struct pages may be freed, so a page-based vmap cannot be used and
only a pfn-based one works, as shown in point 1.
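
As an illustration of point 1: the pfn of any page in a folio can be
derived from the head page alone, so the pfn array can be built without
touching the (possibly freed) tail struct pages. A minimal sketch,
assuming a driver-local copy of vmap_pfn with the pfn_valid check dropped
(called udmabuf_vmap_pfn here):

static void *vmap_folio_pfns(struct folio *folio, pgoff_t offset,
			     unsigned long nr_pages)
{
	unsigned long *pfns;
	unsigned long i;
	void *vaddr;

	pfns = kvmalloc_array(nr_pages, sizeof(*pfns), GFP_KERNEL);
	if (!pfns)
		return NULL;

	/* head pfn + page index within the folio; no tail struct page needed */
	for (i = 0; i < nr_pages; i++)
		pfns[i] = folio_pfn(folio) + offset + i;

	vaddr = udmabuf_vmap_pfn(pfns, nr_pages, PAGE_KERNEL);
	kvfree(pfns);
	return vaddr;
}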

Furthermore, a folio-based vmap only needs to record each folio (plus
offset and nr_pages if a range is needed). For a 20MB vmap, a page-based
approach needs 5120 struct page pointers (40KB), while 2MB folios need
only 10 folio pointers (80 bytes).
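
For reference, those numbers assume 4KB base pages and 8-byte pointers:

  20MB / 4KB base pages = 5120 struct page pointers * 8B = 40KB
  20MB / 2MB folios     =   10 struct folio pointers * 8B = 80B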

Matthew pointed out that Vishal has also proposed a folio-based vmap,
vmap_file [3]. This RFC wants a range within each folio, not only a
whole-folio mapping (as for a file's folios), so that cases like mapping a
range of an HVO folio can be handled.

Please give me more suggestions.

Test case:
//enable/disable HVO
1. echo [1|0] > /proc/sys/vm/hugetlb_optimize_vmemmap
//prepare HUGETLB
2. echo 10 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
3. ./udmabuf_vmap
4. Check the output, and dmesg for any warnings.

[1] https://lore.kernel.org/all/9172a601-c360-0d5b-ba1b-33deba430455@linux.intel.com/
[2] https://lore.kernel.org/lkml/20250312061513.1126496-1-link@vivo.com/
[3] https://lore.kernel.org/linux-mm/20250131001806.92349-1-vishal.moola@gmail.com/

Huan Yang (6):
  udmabuf: try fix udmabuf vmap
  udmabuf: try udmabuf vmap test
  mm/vmalloc: try add vmap folios range
  udmabuf: use vmap_range_folios
  udmabuf: vmap test suit for pages and pfns compare
  udmabuf: remove no need code

 drivers/dma-buf/udmabuf.c | 29 +++++++++-----------
 include/linux/vmalloc.h   | 57 +++++++++++++++++++++++++++++++++++++++
 mm/vmalloc.c              | 47 ++++++++++++++++++++++++++++++++
 3 files changed, 117 insertions(+), 16 deletions(-)

--
2.48.1




Thread overview: 21+ messages
2025-03-27  9:28 [RFC PATCH 0/6] Deep talk about folio vmap Huan Yang
2025-03-27  9:28 ` [RFC PATCH 1/6] udmabuf: try fix udmabuf vmap Huan Yang
2025-03-27  9:28 ` [RFC PATCH 2/6] udmabuf: try udmabuf vmap test Huan Yang
2025-03-27  9:28 ` [RFC PATCH 3/6] mm/vmalloc: try add vmap folios range Huan Yang
2025-03-27  9:28 ` [RFC PATCH 4/6] udmabuf: use vmap_range_folios Huan Yang
2025-03-27  9:28 ` [RFC PATCH 5/6] udmabuf: vmap test suit for pages and pfns compare Huan Yang
2025-03-27  9:28 ` [RFC PATCH 6/6] udmabuf: remove no need code Huan Yang
2025-03-28 21:09 ` [RFC PATCH 0/6] Deep talk about folio vmap Vishal Moola (Oracle)
2025-04-04  9:01 ` CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP is broken, was " Christoph Hellwig
2025-04-04  9:38   ` Muchun Song
2025-04-04 10:07     ` Muchun Song
2025-04-07  1:59       ` Huan Yang
2025-04-07  2:57         ` Muchun Song
2025-04-07  3:21           ` Huan Yang
2025-04-07  3:37             ` Muchun Song
2025-04-07  6:43               ` Muchun Song
2025-04-07  7:09                 ` Huan Yang
2025-04-07  7:22                   ` Muchun Song
2025-04-07  8:55                     ` Huan Yang
2025-04-07  8:59                 ` Christoph Hellwig
2025-04-07  9:48                   ` Muchun Song
