linux-mm.kvack.org archive mirror
* [PATCH v3 0/3] mm/memfd: Reserve hugetlb folios before allocation
@ 2025-05-21  5:19 Vivek Kasireddy
  2025-05-21  5:19 ` [PATCH v3 1/3] mm/hugetlb: Make hugetlb_reserve_pages() return nr of entries updated Vivek Kasireddy
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Vivek Kasireddy @ 2025-05-21  5:19 UTC (permalink / raw)
  To: dri-devel, linux-mm
  Cc: Vivek Kasireddy, Gerd Hoffmann, Steve Sistare, Muchun Song,
	David Hildenbrand, Andrew Morton

There are cases where we try to pin a folio but discover that it has
not been faulted in. So, we try to allocate it in memfd_alloc_folio(),
but there is a chance that we encounter a crash
(VM_BUG_ON(!h->resv_huge_pages)) if there are no active reservations
at that instant. This issue was reported by syzbot.

Therefore, to fix this issue, we just need to make a reservation
(by calling hugetlb_reserve_pages()) before we try to allocate the
folio. This ensures that the region/subpool accounting associated
with our allocation is done properly.

-----------------------------

Patchset overview:

Patch 1: Return nr of updated entries from hugetlb_reserve_pages()
Patch 2: Fix for VM_BUG_ON(!h->resv_huge_pages) crash reported by syzbot
Patch 3: New udmabuf selftest to invoke memfd_alloc_folio()

This series was tested by running the new udmabuf selftest introduced
in patch #3 along with the other selftests.

Changelog:
v2 -> v3:
- Call hugetlb_unreserve_pages() only if the reservation was actively
  (and successfully) made from memfd_alloc_folio() (David)

v1 -> v2:
- Replace VM_BUG_ON() with WARN_ON_ONCE() in the function
  alloc_hugetlb_folio_reserve() (David)
- Move the inline function subpool_inode() from hugetlb.c into the
  relevant header (hugetlb.h)
- Call hugetlb_unreserve_pages() if the folio cannot be added to
  the page cache as well
- Added a new udmabuf selftest to exercise the same path as that
  of syzbot

Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Steve Sistare <steven.sistare@oracle.com>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: David Hildenbrand <david@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>

Vivek Kasireddy (3):
  mm/hugetlb: Make hugetlb_reserve_pages() return nr of entries updated
  mm/memfd: Reserve hugetlb folios before allocation
  selftests/udmabuf: Add a test to pin first before writing to memfd

 fs/hugetlbfs/inode.c                          |  8 ++---
 include/linux/hugetlb.h                       |  7 +++-
 mm/hugetlb.c                                  | 33 +++++++++++--------
 mm/memfd.c                                    | 17 ++++++++--
 .../selftests/drivers/dma-buf/udmabuf.c       | 20 ++++++++++-
 5 files changed, 62 insertions(+), 23 deletions(-)

-- 
2.49.0




* [PATCH v3 1/3] mm/hugetlb: Make hugetlb_reserve_pages() return nr of entries updated
  2025-05-21  5:19 [PATCH v3 0/3] mm/memfd: Reserve hugetlb folios before allocation Vivek Kasireddy
@ 2025-05-21  5:19 ` Vivek Kasireddy
  2025-06-04 23:35   ` Andrew Morton
  2025-05-21  5:19 ` [PATCH v3 2/3] mm/memfd: Reserve hugetlb folios before allocation Vivek Kasireddy
  2025-05-21  5:19 ` [PATCH v3 3/3] selftests/udmabuf: Add a test to pin first before writing to memfd Vivek Kasireddy
  2 siblings, 1 reply; 8+ messages in thread
From: Vivek Kasireddy @ 2025-05-21  5:19 UTC (permalink / raw)
  To: dri-devel, linux-mm
  Cc: Vivek Kasireddy, Steve Sistare, Muchun Song, David Hildenbrand,
	Andrew Morton

Currently, hugetlb_reserve_pages() returns a bool to indicate whether
the reservation map update for the range [from, to] was successful or
not. This is not sufficient for the case where the caller needs to
determine how many entries were updated for the range.

Therefore, have hugetlb_reserve_pages() return the number of entries
updated in the reservation map associated with the range [from, to].
Also, update the callers of hugetlb_reserve_pages() to handle the new
return value.

Cc: Steve Sistare <steven.sistare@oracle.com>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: David Hildenbrand <david@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
---
 fs/hugetlbfs/inode.c    |  8 ++++----
 include/linux/hugetlb.h |  2 +-
 mm/hugetlb.c            | 19 +++++++++++++------
 3 files changed, 18 insertions(+), 11 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index e4de5425838d..00b2d1a032fd 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -150,10 +150,10 @@ static int hugetlbfs_file_mmap(struct file *file, struct vm_area_struct *vma)
 	if (inode->i_flags & S_PRIVATE)
 		vm_flags |= VM_NORESERVE;
 
-	if (!hugetlb_reserve_pages(inode,
+	if (hugetlb_reserve_pages(inode,
 				vma->vm_pgoff >> huge_page_order(h),
 				len >> huge_page_shift(h), vma,
-				vm_flags))
+				vm_flags) < 0)
 		goto out;
 
 	ret = 0;
@@ -1561,9 +1561,9 @@ struct file *hugetlb_file_setup(const char *name, size_t size,
 	inode->i_size = size;
 	clear_nlink(inode);
 
-	if (!hugetlb_reserve_pages(inode, 0,
+	if (hugetlb_reserve_pages(inode, 0,
 			size >> huge_page_shift(hstate_inode(inode)), NULL,
-			acctflag))
+			acctflag) < 0)
 		file = ERR_PTR(-ENOMEM);
 	else
 		file = alloc_file_pseudo(inode, mnt, name, O_RDWR,
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 8f3ac832ee7f..793d8390d3e4 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -148,7 +148,7 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 			     uffd_flags_t flags,
 			     struct folio **foliop);
 #endif /* CONFIG_USERFAULTFD */
-bool hugetlb_reserve_pages(struct inode *inode, long from, long to,
+long hugetlb_reserve_pages(struct inode *inode, long from, long to,
 						struct vm_area_struct *vma,
 						vm_flags_t vm_flags);
 long hugetlb_unreserve_pages(struct inode *inode, long start, long end,
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 7ae38bfb9096..cba9d60a4e28 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -7241,8 +7241,15 @@ long hugetlb_change_protection(struct vm_area_struct *vma,
 	return pages > 0 ? (pages << h->order) : pages;
 }
 
-/* Return true if reservation was successful, false otherwise.  */
-bool hugetlb_reserve_pages(struct inode *inode,
+/*
+ * Update the reservation map for the range [from, to].
+ *
+ * Returns the number of entries that would be added to the reservation map
+ * associated with the range [from, to].  This number is greater than or
+ * equal to zero. -EINVAL or -ENOMEM is returned in case of errors.
+ */
+
+long hugetlb_reserve_pages(struct inode *inode,
 					long from, long to,
 					struct vm_area_struct *vma,
 					vm_flags_t vm_flags)
@@ -7257,7 +7264,7 @@ bool hugetlb_reserve_pages(struct inode *inode,
 	/* This should never happen */
 	if (from > to) {
 		VM_WARN(1, "%s called with a negative range\n", __func__);
-		return false;
+		return -EINVAL;
 	}
 
 	/*
@@ -7272,7 +7279,7 @@ bool hugetlb_reserve_pages(struct inode *inode,
 	 * without using reserves
 	 */
 	if (vm_flags & VM_NORESERVE)
-		return true;
+		return 0;
 
 	/*
 	 * Shared mappings base their reservation on the number of pages that
@@ -7379,7 +7386,7 @@ bool hugetlb_reserve_pages(struct inode *inode,
 			hugetlb_cgroup_put_rsvd_cgroup(h_cg);
 		}
 	}
-	return true;
+	return chg;
 
 out_put_pages:
 	spool_resv = chg - gbl_reserve;
@@ -7407,7 +7414,7 @@ bool hugetlb_reserve_pages(struct inode *inode,
 		kref_put(&resv_map->refs, resv_map_release);
 		set_vma_resv_map(vma, NULL);
 	}
-	return false;
+	return chg < 0 ? chg : add < 0 ? add : -EINVAL;
 }
 
 long hugetlb_unreserve_pages(struct inode *inode, long start, long end,
-- 
2.49.0




* [PATCH v3 2/3] mm/memfd: Reserve hugetlb folios before allocation
  2025-05-21  5:19 [PATCH v3 0/3] mm/memfd: Reserve hugetlb folios before allocation Vivek Kasireddy
  2025-05-21  5:19 ` [PATCH v3 1/3] mm/hugetlb: Make hugetlb_reserve_pages() return nr of entries updated Vivek Kasireddy
@ 2025-05-21  5:19 ` Vivek Kasireddy
  2025-05-21  5:19 ` [PATCH v3 3/3] selftests/udmabuf: Add a test to pin first before writing to memfd Vivek Kasireddy
  2 siblings, 0 replies; 8+ messages in thread
From: Vivek Kasireddy @ 2025-05-21  5:19 UTC (permalink / raw)
  To: dri-devel, linux-mm
  Cc: Vivek Kasireddy, syzbot+a504cb5bae4fe117ba94, Steve Sistare,
	Muchun Song, David Hildenbrand, Andrew Morton

There are cases where we try to pin a folio but discover that it has
not been faulted in. So, we try to allocate it in memfd_alloc_folio(),
but there is a chance that we encounter a crash
(VM_BUG_ON(!h->resv_huge_pages)) if there are no active reservations
at that instant. This issue was reported by syzbot:

kernel BUG at mm/hugetlb.c:2403!
Oops: invalid opcode: 0000 [#1] PREEMPT SMP KASAN NOPTI
CPU: 0 UID: 0 PID: 5315 Comm: syz.0.0 Not tainted
6.13.0-rc5-syzkaller-00161-g63676eefb7a0 #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS
1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
RIP: 0010:alloc_hugetlb_folio_reserve+0xbc/0xc0 mm/hugetlb.c:2403
Code: 1f eb 05 e8 56 18 a0 ff 48 c7 c7 40 56 61 8e e8 ba 21 cc 09 4c 89
f0 5b 41 5c 41 5e 41 5f 5d c3 cc cc cc cc e8 35 18 a0 ff 90 <0f> 0b 66
90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 f3 0f
RSP: 0018:ffffc9000d3d77f8 EFLAGS: 00010087
RAX: ffffffff81ff6beb RBX: 0000000000000000 RCX: 0000000000100000
RDX: ffffc9000e51a000 RSI: 00000000000003ec RDI: 00000000000003ed
RBP: 1ffffffff34810d9 R08: ffffffff81ff6ba3 R09: 1ffffd4000093005
R10: dffffc0000000000 R11: fffff94000093006 R12: dffffc0000000000
R13: dffffc0000000000 R14: ffffea0000498000 R15: ffffffff9a4086c8
FS:  00007f77ac12e6c0(0000) GS:ffff88801fc00000(0000)
knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f77ab54b170 CR3: 0000000040b70000 CR4: 0000000000352ef0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <TASK>
 memfd_alloc_folio+0x1bd/0x370 mm/memfd.c:88
 memfd_pin_folios+0xf10/0x1570 mm/gup.c:3750
 udmabuf_pin_folios drivers/dma-buf/udmabuf.c:346 [inline]
 udmabuf_create+0x70e/0x10c0 drivers/dma-buf/udmabuf.c:443
 udmabuf_ioctl_create drivers/dma-buf/udmabuf.c:495 [inline]
 udmabuf_ioctl+0x301/0x4e0 drivers/dma-buf/udmabuf.c:526
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:906 [inline]
 __se_sys_ioctl+0xf5/0x170 fs/ioctl.c:892
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

Therefore, to fix this issue, we just need to make a reservation
(by calling hugetlb_reserve_pages()) before we try to allocate the
folio. This ensures that the region/subpool accounting associated
with our allocation is done properly.

While at it, move subpool_inode() into the hugetlb header and
replace the VM_BUG_ON() with WARN_ON_ONCE(), as there is no need to
crash the system in this scenario; we can simply warn and fail the
allocation.

Fixes: 26a8ea80929c ("mm/hugetlb: fix memfd_pin_folios resv_huge_pages leak")
Reported-by: syzbot+a504cb5bae4fe117ba94@syzkaller.appspotmail.com
Signed-off-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
Cc: Steve Sistare <steven.sistare@oracle.com>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: David Hildenbrand <david@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
---
 include/linux/hugetlb.h |  5 +++++
 mm/hugetlb.c            | 14 ++++++--------
 mm/memfd.c              | 17 ++++++++++++++---
 3 files changed, 25 insertions(+), 11 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 793d8390d3e4..ca3d6a3acae1 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -729,6 +729,11 @@ extern unsigned int default_hstate_idx;
 
 #define default_hstate (hstates[default_hstate_idx])
 
+static inline struct hugepage_subpool *subpool_inode(struct inode *inode)
+{
+	return HUGETLBFS_SB(inode->i_sb)->spool;
+}
+
 static inline struct hugepage_subpool *hugetlb_folio_subpool(struct folio *folio)
 {
 	return folio->_hugetlb_subpool;
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index cba9d60a4e28..6a9b701586eb 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -283,11 +283,6 @@ static long hugepage_subpool_put_pages(struct hugepage_subpool *spool,
 	return ret;
 }
 
-static inline struct hugepage_subpool *subpool_inode(struct inode *inode)
-{
-	return HUGETLBFS_SB(inode->i_sb)->spool;
-}
-
 static inline struct hugepage_subpool *subpool_vma(struct vm_area_struct *vma)
 {
 	return subpool_inode(file_inode(vma->vm_file));
@@ -2354,12 +2349,15 @@ struct folio *alloc_hugetlb_folio_reserve(struct hstate *h, int preferred_nid,
 	struct folio *folio;
 
 	spin_lock_irq(&hugetlb_lock);
+	if (WARN_ON_ONCE(!h->resv_huge_pages)) {
+		spin_unlock_irq(&hugetlb_lock);
+		return NULL;
+	}
+
 	folio = dequeue_hugetlb_folio_nodemask(h, gfp_mask, preferred_nid,
 					       nmask);
-	if (folio) {
-		VM_BUG_ON(!h->resv_huge_pages);
+	if (folio)
 		h->resv_huge_pages--;
-	}
 
 	spin_unlock_irq(&hugetlb_lock);
 	return folio;
diff --git a/mm/memfd.c b/mm/memfd.c
index c64df1343059..783f61de5784 100644
--- a/mm/memfd.c
+++ b/mm/memfd.c
@@ -70,7 +70,6 @@ struct folio *memfd_alloc_folio(struct file *memfd, pgoff_t idx)
 #ifdef CONFIG_HUGETLB_PAGE
 	struct folio *folio;
 	gfp_t gfp_mask;
-	int err;
 
 	if (is_file_hugepages(memfd)) {
 		/*
@@ -79,12 +78,19 @@ struct folio *memfd_alloc_folio(struct file *memfd, pgoff_t idx)
 		 * alloc from. Also, the folio will be pinned for an indefinite
 		 * amount of time, so it is not expected to be migrated away.
 		 */
+		struct inode *inode = file_inode(memfd);
 		struct hstate *h = hstate_file(memfd);
+		int err = -ENOMEM;
+		long nr_resv;
 
 		gfp_mask = htlb_alloc_mask(h);
 		gfp_mask &= ~(__GFP_HIGHMEM | __GFP_MOVABLE);
 		idx >>= huge_page_order(h);
 
+		nr_resv = hugetlb_reserve_pages(inode, idx, idx + 1, NULL, 0);
+		if (nr_resv < 0)
+			return ERR_PTR(nr_resv);
+
 		folio = alloc_hugetlb_folio_reserve(h,
 						    numa_node_id(),
 						    NULL,
@@ -95,12 +101,17 @@ struct folio *memfd_alloc_folio(struct file *memfd, pgoff_t idx)
 							idx);
 			if (err) {
 				folio_put(folio);
-				return ERR_PTR(err);
+				goto err_unresv;
 			}
+
+			hugetlb_set_folio_subpool(folio, subpool_inode(inode));
 			folio_unlock(folio);
 			return folio;
 		}
-		return ERR_PTR(-ENOMEM);
+err_unresv:
+		if (nr_resv > 0)
+			hugetlb_unreserve_pages(inode, idx, idx + 1, 0);
+		return ERR_PTR(err);
 	}
 #endif
 	return shmem_read_folio(memfd->f_mapping, idx);
-- 
2.49.0




* [PATCH v3 3/3] selftests/udmabuf: Add a test to pin first before writing to memfd
  2025-05-21  5:19 [PATCH v3 0/3] mm/memfd: Reserve hugetlb folios before allocation Vivek Kasireddy
  2025-05-21  5:19 ` [PATCH v3 1/3] mm/hugetlb: Make hugetlb_reserve_pages() return nr of entries updated Vivek Kasireddy
  2025-05-21  5:19 ` [PATCH v3 2/3] mm/memfd: Reserve hugetlb folios before allocation Vivek Kasireddy
@ 2025-05-21  5:19 ` Vivek Kasireddy
  2 siblings, 0 replies; 8+ messages in thread
From: Vivek Kasireddy @ 2025-05-21  5:19 UTC (permalink / raw)
  To: dri-devel, linux-mm
  Cc: Vivek Kasireddy, Gerd Hoffmann, Steve Sistare, Muchun Song,
	David Hildenbrand, Andrew Morton

Unlike the existing tests, this new test creates a memfd (backed by
hugetlb) and pins a small subset of its folios before writing to
(populating) it. This is a valid use-case that invokes the
memfd_alloc_folio() kernel API and is expected to succeed as long as
there are enough hugetlb folios to satisfy the allocation.

Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Steve Sistare <steven.sistare@oracle.com>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: David Hildenbrand <david@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
---
 .../selftests/drivers/dma-buf/udmabuf.c       | 20 ++++++++++++++++++-
 1 file changed, 19 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/drivers/dma-buf/udmabuf.c b/tools/testing/selftests/drivers/dma-buf/udmabuf.c
index 6062723a172e..77aa2897e79f 100644
--- a/tools/testing/selftests/drivers/dma-buf/udmabuf.c
+++ b/tools/testing/selftests/drivers/dma-buf/udmabuf.c
@@ -138,7 +138,7 @@ int main(int argc, char *argv[])
 	void *addr1, *addr2;
 
 	ksft_print_header();
-	ksft_set_plan(6);
+	ksft_set_plan(7);
 
 	devfd = open("/dev/udmabuf", O_RDWR);
 	if (devfd < 0) {
@@ -248,6 +248,24 @@ int main(int argc, char *argv[])
 	else
 		ksft_test_result_pass("%s: [PASS,test-6]\n", TEST_PREFIX);
 
+	close(buf);
+	close(memfd);
+
+	/* same test as above but we pin first before writing to memfd */
+	page_size = getpagesize() * 512; /* 2 MB */
+	size = MEMFD_SIZE * page_size;
+	memfd = create_memfd_with_seals(size, true);
+	buf = create_udmabuf_list(devfd, memfd, size);
+	addr2 = mmap_fd(buf, NUM_PAGES * NUM_ENTRIES * getpagesize());
+	addr1 = mmap_fd(memfd, size);
+	write_to_memfd(addr1, size, 'a');
+	write_to_memfd(addr1, size, 'b');
+	ret = compare_chunks(addr1, addr2, size);
+	if (ret < 0)
+		ksft_test_result_fail("%s: [FAIL,test-7]\n", TEST_PREFIX);
+	else
+		ksft_test_result_pass("%s: [PASS,test-7]\n", TEST_PREFIX);
+
 	close(buf);
 	close(memfd);
 	close(devfd);
-- 
2.49.0




* Re: [PATCH v3 1/3] mm/hugetlb: Make hugetlb_reserve_pages() return nr of entries updated
  2025-05-21  5:19 ` [PATCH v3 1/3] mm/hugetlb: Make hugetlb_reserve_pages() return nr of entries updated Vivek Kasireddy
@ 2025-06-04 23:35   ` Andrew Morton
  2025-06-06  6:14     ` Kasireddy, Vivek
  0 siblings, 1 reply; 8+ messages in thread
From: Andrew Morton @ 2025-06-04 23:35 UTC (permalink / raw)
  To: Vivek Kasireddy
  Cc: dri-devel, linux-mm, Steve Sistare, Muchun Song,
	David Hildenbrand

On Tue, 20 May 2025 22:19:35 -0700 Vivek Kasireddy <vivek.kasireddy@intel.com> wrote:

> Currently, hugetlb_reserve_pages() returns a bool to indicate whether
> the reservation map update for the range [from, to] was successful or
> not. This is not sufficient for the case where the caller needs to
> determine how many entries were updated for the range.
> 
> Therefore, have hugetlb_reserve_pages() return the number of entries
> updated in the reservation map associated with the range [from, to].
> Also, update the callers of hugetlb_reserve_pages() to handle the new
> return value.

Everyone has forgotten, so please refresh, retest and resend after
-rc1?

Also, patch [2/3] addresses a BUG which was introduced into 6.12. 
Presumably we want to backport the fix into -stable?  If so, it's
better to present this as a standalone patch, including the cc:stable
tag.  This is because I'd be looking to fast-track the fix into the
6.16-rcX cycle whereas less urgent things would be routed into
6.17-rc1.

Also, [2/3] has

	Reported-by: syzbot+a504cb5bae4fe117ba94@syzkaller.appspotmail.com

which is kind of annoying if one wishes to see the syzbot report.  OK,
it takes 30 seconds of googling, but adding a Closes: link is
nice.




* RE: [PATCH v3 1/3] mm/hugetlb: Make hugetlb_reserve_pages() return nr of entries updated
  2025-06-04 23:35   ` Andrew Morton
@ 2025-06-06  6:14     ` Kasireddy, Vivek
  2025-06-08  0:25       ` Andrew Morton
  0 siblings, 1 reply; 8+ messages in thread
From: Kasireddy, Vivek @ 2025-06-06  6:14 UTC (permalink / raw)
  To: Andrew Morton
  Cc: dri-devel@lists.freedesktop.org, linux-mm@kvack.org,
	Steve Sistare, Muchun Song, David Hildenbrand

Hi Andrew,

> Subject: Re: [PATCH v3 1/3] mm/hugetlb: Make hugetlb_reserve_pages()
> return nr of entries updated
> 
> On Tue, 20 May 2025 22:19:35 -0700 Vivek Kasireddy
> <vivek.kasireddy@intel.com> wrote:
> 
> > Currently, hugetlb_reserve_pages() returns a bool to indicate whether
> > the reservation map update for the range [from, to] was successful or
> > not. This is not sufficient for the case where the caller needs to
> > determine how many entries were updated for the range.
> >
> > Therefore, have hugetlb_reserve_pages() return the number of entries
> > updated in the reservation map associated with the range [from, to].
> > Also, update the callers of hugetlb_reserve_pages() to handle the new
> > return value.
> 
> Everyone has forgotten, so please refresh, retest and resend after
> -rc1?
Sure, will do.

> 
> Also, patch [2/3] addresses a BUG which was introduced into 6.12.
> Presumably we want to backport the fix into -stable?  If so, it's
> better to present this as a standalone patch, including the cc:stable
> tag.  This is because I'd be looking to fast-track the fix into the
> 6.16-rcX cycle whereas less urgent things would be routed into
> 6.17-rc1.
Unless I merge patches #1 and #2, I don't think I can come up with a
standalone fix to address the BUG. So, I don't mind having this short
series deferred until 6.17-rc1.

> 
> Also, [2/3] has
> 
> 	Reported-by:
> syzbot+a504cb5bae4fe117ba94@syzkaller.appspotmail.com
> 
> which is kind of annoying if one wishes to see the syzbot report.  OK,
> it takes 30 seconds of googling, but adding a Closes: link is
> nice.
Ok, no problem, I'll add the Closes link in the next version.

Thanks,
Vivek





* Re: [PATCH v3 1/3] mm/hugetlb: Make hugetlb_reserve_pages() return nr of entries updated
  2025-06-06  6:14     ` Kasireddy, Vivek
@ 2025-06-08  0:25       ` Andrew Morton
  2025-06-11  5:05         ` Kasireddy, Vivek
  0 siblings, 1 reply; 8+ messages in thread
From: Andrew Morton @ 2025-06-08  0:25 UTC (permalink / raw)
  To: Kasireddy, Vivek
  Cc: dri-devel@lists.freedesktop.org, linux-mm@kvack.org,
	Steve Sistare, Muchun Song, David Hildenbrand

On Fri, 6 Jun 2025 06:14:06 +0000 "Kasireddy, Vivek" <vivek.kasireddy@intel.com> wrote:

> > Also, patch [2/3] addresses a BUG which was introduced into 6.12.
> > Presumably we want to backport the fix into -stable?  If so, it's
> > better to present this as a standalone patch, including the cc:stable
> > tag.  This is because I'd be looking to fast-track the fix into the
> > 6.16-rcX cycle whereas less urgent things would be routed into
> > 6.17-rc1.
> Unless I merge patches #1 and #2, I don't think I can come up with a
> standalone fix to address the BUG. So, I don't mind having this short
> series deferred until 6.17-rc1.

If I understand correctly, we have a way in which unprivileged
userspace can trigger a BUG.  Unless we're very lucky, this wrecks the
running kernel.  So fixing this in shipped kernels is very important.

So if I indeed understand correctly, please try to find a minimal fix
which is suitable for backporting and then, as a separate series,
propose any changes which you think would improve things going forward.

Thanks.



* RE: [PATCH v3 1/3] mm/hugetlb: Make hugetlb_reserve_pages() return nr of entries updated
  2025-06-08  0:25       ` Andrew Morton
@ 2025-06-11  5:05         ` Kasireddy, Vivek
  0 siblings, 0 replies; 8+ messages in thread
From: Kasireddy, Vivek @ 2025-06-11  5:05 UTC (permalink / raw)
  To: Andrew Morton
  Cc: dri-devel@lists.freedesktop.org, linux-mm@kvack.org,
	Steve Sistare, Muchun Song, David Hildenbrand

Hi Andrew,

> Subject: Re: [PATCH v3 1/3] mm/hugetlb: Make hugetlb_reserve_pages()
> return nr of entries updated
> 
> On Fri, 6 Jun 2025 06:14:06 +0000 "Kasireddy, Vivek"
> <vivek.kasireddy@intel.com> wrote:
> 
> > > Also, patch [2/3] addresses a BUG which was introduced into 6.12.
> > > Presumably we want to backport the fix into -stable?  If so, it's
> > > better to present this as a standalone patch, including the cc:stable
> > > tag.  This is because I'd be looking to fast-track the fix into the
> > > 6.16-rcX cycle whereas less urgent things would be routed into
> > > 6.17-rc1.
> > Unless I merge patches #1 and #2, I don't think I can come up with a
> > standalone fix to address the BUG. So, I don't mind having this short
> > series deferred until 6.17-rc1.
> 
> If I understand correctly, we have a way in which unprivileged
> userspace can trigger a BUG.  Unless we're very lucky, this wrecks the
> running kernel.  So fixing this in shipped kernels is very important.
> 
> So if I indeed understand correctly, please try to find a minimal fix
> which is suitable for backporting and then, as a separate series,
> propose any changes which you think would improve things going forward.
Ok, I'll try to come up with a standalone patch to fix the reported BUG 
and separate out other changes into different patches.

Thanks,
Vivek

> 
> Thanks.


