public inbox for linux-mm@kvack.org
* [PATCH v4 0/3] mm/vmalloc: free unused pages on vrealloc() shrink
@ 2026-03-14  9:04 Shivam Kalra via B4 Relay
  2026-03-14  9:04 ` [PATCH v4 1/3] mm/vmalloc: extract vm_area_free_pages() helper from vfree() Shivam Kalra via B4 Relay
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Shivam Kalra via B4 Relay @ 2026-03-14  9:04 UTC (permalink / raw)
  To: Andrew Morton, Uladzislau Rezki
  Cc: linux-mm, linux-kernel, Alice Ryhl, Danilo Krummrich,
	Shivam Kalra

This series implements the TODO in vrealloc() to unmap and free unused
pages when shrinking across a page boundary.

Problem:
When vrealloc() shrinks an allocation, it updates bookkeeping
(requested_size, KASAN shadow) but does not free the underlying physical
pages. This wastes memory for the lifetime of the allocation.

Solution:
- Patch 1: Extracts a vm_area_free_pages(vm, start, end) helper from
  vfree() that frees a range of pages with memcg and nr_vmalloc_pages
  accounting. Freed page pointers are set to NULL to prevent stale
  references.
- Patch 2: Uses the helper to free tail pages when vrealloc() shrinks
  across a page boundary. Skips huge page allocations (page_order > 0),
  since compound pages cannot be partially freed. Also fixes the
  grow-in-place path to check vm->nr_pages instead of
  get_vm_area_size(), since the latter reflects the virtual reservation
  and does not change on shrink.
- Patch 3: Adds a vrealloc test case to lib/test_vmalloc that exercises
  grow-realloc, shrink-across-boundary, shrink-within-page, and
  grow-in-place paths with data integrity validation.

The virtual address reservation is kept intact to preserve the range
for potential future grow-in-place support.
A concrete user is the Rust binder driver's KVVec::shrink_to [1], which
performs explicit vrealloc() shrinks for memory reclamation.

Tested:
- KASAN KUnit (vmalloc_oob passes)
- lib/test_vmalloc stress tests (3/3, 1M iterations each)
- checkpatch, sparse, W=1, allmodconfig, coccicheck clean

[1] https://lore.kernel.org/all/20260216-binder-shrink-vec-v3-v6-0-ece8e8593e53@zohomail.in/

Signed-off-by: Shivam Kalra <shivamkalra98@zohomail.in>
---
Changes in v4:
- Rename vmalloc_free_pages() to vm_area_free_pages() to align with
  vm_area_alloc_pages() (Uladzislau Rezki)
- NULL out freed vm->pages[] entries to prevent stale pointers (Alice Ryhl)
- Remove redundant if (vm->nr_pages) guard in vfree() (Uladzislau Rezki)
- Add vrealloc test case to lib/test_vmalloc (new patch 3/3)
- Link to v3: https://lore.kernel.org/r/20260309-vmalloc-shrink-v3-0-5590fd8de2eb@zohomail.in

Changes in v3:
- Restore the comment
- Rebase to the latest mm-new
- Link to v2: https://lore.kernel.org/r/20260304-vmalloc-shrink-v2-0-28c291d60100@zohomail.in

Changes in v2:
- Updated the base-commit to mm-new
- Fix conflicts after rebase
- Ran `clang-format` on the changes made
- Use a single `kasan_vrealloc` (Alice Ryhl)
- Link to v1: https://lore.kernel.org/r/20260302-vmalloc-shrink-v1-0-46deff465b7e@zohomail.in

---
Shivam Kalra (3):
      mm/vmalloc: extract vm_area_free_pages() helper from vfree()
      mm/vmalloc: free unused pages on vrealloc() shrink
      lib/test_vmalloc: add vrealloc test case

 lib/test_vmalloc.c | 52 ++++++++++++++++++++++++++++++++++++++++++
 mm/vmalloc.c       | 66 ++++++++++++++++++++++++++++++++++++++----------------
 2 files changed, 99 insertions(+), 19 deletions(-)
---
base-commit: 593fab843afbd6800243552aebcc61d02d3cdcb2
change-id: 20260302-vmalloc-shrink-04b2fa688a14

Best regards,
-- 
Shivam Kalra <shivamkalra98@zohomail.in>

^ permalink raw reply	[flat|nested] 6+ messages in thread

* [PATCH v4 1/3] mm/vmalloc: extract vm_area_free_pages() helper from vfree()
  2026-03-14  9:04 [PATCH v4 0/3] mm/vmalloc: free unused pages on vrealloc() shrink Shivam Kalra via B4 Relay
@ 2026-03-14  9:04 ` Shivam Kalra via B4 Relay
  2026-03-14  9:04 ` [PATCH v4 2/3] mm/vmalloc: free unused pages on vrealloc() shrink Shivam Kalra via B4 Relay
  2026-03-14  9:04 ` [PATCH v4 3/3] lib/test_vmalloc: add vrealloc test case Shivam Kalra via B4 Relay
  2 siblings, 0 replies; 6+ messages in thread
From: Shivam Kalra via B4 Relay @ 2026-03-14  9:04 UTC (permalink / raw)
  To: Andrew Morton, Uladzislau Rezki
  Cc: linux-mm, linux-kernel, Alice Ryhl, Danilo Krummrich,
	Shivam Kalra

From: Shivam Kalra <shivamkalra98@zohomail.in>

Extract the page-freeing loop and NR_VMALLOC stat accounting from
vfree() into a reusable vm_area_free_pages() helper. The helper operates
on a range [start, end) of pages from a vm_struct, making it suitable
for both full free (vfree) and partial free (upcoming vrealloc shrink).

Freed page pointers in vm->pages[] are set to NULL to prevent stale
references when the vm_struct outlives the free (as in vrealloc shrink).

Signed-off-by: Shivam Kalra <shivamkalra98@zohomail.in>
---
 mm/vmalloc.c | 47 +++++++++++++++++++++++++++++++++--------------
 1 file changed, 33 insertions(+), 14 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index c607307c657a..b29bf58c0e3f 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3416,6 +3416,38 @@ void vfree_atomic(const void *addr)
 		schedule_work(&p->wq);
 }
 
+/*
+ * vm_area_free_pages - free a range of pages from a vmalloc allocation
+ * @vm: the vm_struct containing the pages
+ * @start: first page index to free (inclusive)
+ * @end: end page index of the range to free (exclusive)
+ *
+ * Free pages [start, end) updating NR_VMALLOC stat accounting.
+ * Freed vm->pages[] entries are set to NULL.
+ * Caller is responsible for unmapping (vunmap_range) and KASAN
+ * poisoning before calling this.
+ */
+static void vm_area_free_pages(struct vm_struct *vm, unsigned int start,
+			       unsigned int end)
+{
+	unsigned int i;
+
+	for (i = start; i < end; i++) {
+		struct page *page = vm->pages[i];
+
+		BUG_ON(!page);
+		/*
+		 * High-order allocs for huge vmallocs are split, so
+		 * can be freed as an array of order-0 allocations
+		 */
+		if (!(vm->flags & VM_MAP_PUT_PAGES))
+			mod_lruvec_page_state(page, NR_VMALLOC, -1);
+		__free_page(page);
+		vm->pages[i] = NULL;
+		cond_resched();
+	}
+}
+
 /**
  * vfree - Release memory allocated by vmalloc()
  * @addr:  Memory base address
@@ -3436,7 +3468,6 @@ void vfree_atomic(const void *addr)
 void vfree(const void *addr)
 {
 	struct vm_struct *vm;
-	int i;
 
 	if (unlikely(in_interrupt())) {
 		vfree_atomic(addr);
@@ -3459,19 +3490,7 @@ void vfree(const void *addr)
 
 	if (unlikely(vm->flags & VM_FLUSH_RESET_PERMS))
 		vm_reset_perms(vm);
-	for (i = 0; i < vm->nr_pages; i++) {
-		struct page *page = vm->pages[i];
-
-		BUG_ON(!page);
-		/*
-		 * High-order allocs for huge vmallocs are split, so
-		 * can be freed as an array of order-0 allocations
-		 */
-		if (!(vm->flags & VM_MAP_PUT_PAGES))
-			mod_lruvec_page_state(page, NR_VMALLOC, -1);
-		__free_page(page);
-		cond_resched();
-	}
+	vm_area_free_pages(vm, 0, vm->nr_pages);
 	kvfree(vm->pages);
 	kfree(vm);
 }

-- 
2.43.0

* [PATCH v4 2/3] mm/vmalloc: free unused pages on vrealloc() shrink
  2026-03-14  9:04 [PATCH v4 0/3] mm/vmalloc: free unused pages on vrealloc() shrink Shivam Kalra via B4 Relay
  2026-03-14  9:04 ` [PATCH v4 1/3] mm/vmalloc: extract vm_area_free_pages() helper from vfree() Shivam Kalra via B4 Relay
@ 2026-03-14  9:04 ` Shivam Kalra via B4 Relay
  2026-03-16 17:12   ` Uladzislau Rezki
  2026-03-14  9:04 ` [PATCH v4 3/3] lib/test_vmalloc: add vrealloc test case Shivam Kalra via B4 Relay
  2 siblings, 1 reply; 6+ messages in thread
From: Shivam Kalra via B4 Relay @ 2026-03-14  9:04 UTC (permalink / raw)
  To: Andrew Morton, Uladzislau Rezki
  Cc: linux-mm, linux-kernel, Alice Ryhl, Danilo Krummrich,
	Shivam Kalra

From: Shivam Kalra <shivamkalra98@zohomail.in>

When vrealloc() shrinks an allocation and the new size crosses a page
boundary, unmap and free the tail pages that are no longer needed. This
reclaims physical memory that was previously wasted for the lifetime
of the allocation.

The heuristic is simple: always free when at least one full page becomes
unused. Huge page allocations (page_order > 0) are skipped, as partial
freeing would require splitting.

The virtual address reservation (vm->size / vmap_area) is intentionally
kept unchanged, preserving the address for potential future grow-in-place
support.

Fix the grow-in-place check to compare against vm->nr_pages rather than
get_vm_area_size(), since the latter reflects the virtual reservation
which does not shrink. Without this fix, a grow after shrink would
access freed pages.

Signed-off-by: Shivam Kalra <shivamkalra98@zohomail.in>
---
 mm/vmalloc.c | 19 ++++++++++++++-----
 1 file changed, 14 insertions(+), 5 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index b29bf58c0e3f..2c455f2038f6 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -4345,14 +4345,23 @@ void *vrealloc_node_align_noprof(const void *p, size_t size, unsigned long align
 			goto need_realloc;
 	}
 
-	/*
-	 * TODO: Shrink the vm_area, i.e. unmap and free unused pages. What
-	 * would be a good heuristic for when to shrink the vm_area?
-	 */
 	if (size <= old_size) {
+		unsigned int new_nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
+
 		/* Zero out "freed" memory, potentially for future realloc. */
 		if (want_init_on_free() || want_init_on_alloc(flags))
 			memset((void *)p + size, 0, old_size - size);
+
+		/* Free tail pages when shrink crosses a page boundary. */
+		if (new_nr_pages < vm->nr_pages && !vm_area_page_order(vm)) {
+			unsigned long addr = (unsigned long)p;
+
+			vunmap_range(addr + (new_nr_pages << PAGE_SHIFT),
+				     addr + (vm->nr_pages << PAGE_SHIFT));
+
+			vm_area_free_pages(vm, new_nr_pages, vm->nr_pages);
+			vm->nr_pages = new_nr_pages;
+		}
 		vm->requested_size = size;
 		kasan_vrealloc(p, old_size, size);
 		return (void *)p;
@@ -4361,7 +4370,7 @@ void *vrealloc_node_align_noprof(const void *p, size_t size, unsigned long align
 	/*
 	 * We already have the bytes available in the allocation; use them.
 	 */
-	if (size <= alloced_size) {
+	if (size <= (size_t)vm->nr_pages << PAGE_SHIFT) {
 		/*
 		 * No need to zero memory here, as unused memory will have
 		 * already been zeroed at initial allocation time or during

-- 
2.43.0

* [PATCH v4 3/3] lib/test_vmalloc: add vrealloc test case
  2026-03-14  9:04 [PATCH v4 0/3] mm/vmalloc: free unused pages on vrealloc() shrink Shivam Kalra via B4 Relay
  2026-03-14  9:04 ` [PATCH v4 1/3] mm/vmalloc: extract vm_area_free_pages() helper from vfree() Shivam Kalra via B4 Relay
  2026-03-14  9:04 ` [PATCH v4 2/3] mm/vmalloc: free unused pages on vrealloc() shrink Shivam Kalra via B4 Relay
@ 2026-03-14  9:04 ` Shivam Kalra via B4 Relay
  2 siblings, 0 replies; 6+ messages in thread
From: Shivam Kalra via B4 Relay @ 2026-03-14  9:04 UTC (permalink / raw)
  To: Andrew Morton, Uladzislau Rezki
  Cc: linux-mm, linux-kernel, Alice Ryhl, Danilo Krummrich,
	Shivam Kalra

From: Shivam Kalra <shivamkalra98@zohomail.in>

Introduce a new test case "vrealloc_test" that exercises the vrealloc()
shrink and in-place grow paths:

  - Grow beyond allocated pages (triggers full reallocation).
  - Shrink crossing a page boundary (frees tail pages).
  - Shrink within the same page (no page freeing).
  - Grow within the already allocated page count (in-place).

Data integrity is validated after each realloc step by checking that
the first byte of the original allocation is preserved.

The test is gated behind run_test_mask bit 12 (id 4096).

Signed-off-by: Shivam Kalra <shivamkalra98@zohomail.in>
---
 lib/test_vmalloc.c | 52 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 52 insertions(+)

diff --git a/lib/test_vmalloc.c b/lib/test_vmalloc.c
index 876c72c18a0c..ce2b2777a785 100644
--- a/lib/test_vmalloc.c
+++ b/lib/test_vmalloc.c
@@ -55,6 +55,7 @@ __param(int, run_test_mask, 7,
 		"\t\tid: 512,  name: kvfree_rcu_2_arg_vmalloc_test\n"
 		"\t\tid: 1024, name: vm_map_ram_test\n"
 		"\t\tid: 2048, name: no_block_alloc_test\n"
+		"\t\tid: 4096, name: vrealloc_test\n"
 		/* Add a new test case description here. */
 );
 
@@ -421,6 +422,56 @@ vm_map_ram_test(void)
 	return nr_allocated != map_nr_pages;
 }
 
+static int vrealloc_test(void)
+{
+	void *ptr;
+	int i;
+
+	for (i = 0; i < test_loop_count; i++) {
+		ptr = vmalloc(PAGE_SIZE);
+		if (!ptr)
+			return -1;
+
+		*((__u8 *)ptr) = 'a';
+
+		/* Grow: beyond allocated pages, triggers full realloc. */
+		ptr = vrealloc(ptr, 4 * PAGE_SIZE, GFP_KERNEL);
+		if (!ptr)
+			return -1;
+
+		if (*((__u8 *)ptr) != 'a')
+			return -1;
+
+		/* Shrink: crosses page boundary, frees tail pages. */
+		ptr = vrealloc(ptr, PAGE_SIZE, GFP_KERNEL);
+		if (!ptr)
+			return -1;
+
+		if (*((__u8 *)ptr) != 'a')
+			return -1;
+
+		/* Shrink: within same page, no page freeing. */
+		ptr = vrealloc(ptr, PAGE_SIZE / 2, GFP_KERNEL);
+		if (!ptr)
+			return -1;
+
+		if (*((__u8 *)ptr) != 'a')
+			return -1;
+
+		/* Grow: within allocated page, in-place, no realloc. */
+		ptr = vrealloc(ptr, PAGE_SIZE, GFP_KERNEL);
+		if (!ptr)
+			return -1;
+
+		if (*((__u8 *)ptr) != 'a')
+			return -1;
+
+		vfree(ptr);
+	}
+
+	return 0;
+}
+
 struct test_case_desc {
 	const char *test_name;
 	int (*test_func)(void);
@@ -440,6 +491,7 @@ static struct test_case_desc test_case_array[] = {
 	{ "kvfree_rcu_2_arg_vmalloc_test", kvfree_rcu_2_arg_vmalloc_test, },
 	{ "vm_map_ram_test", vm_map_ram_test, },
 	{ "no_block_alloc_test", no_block_alloc_test, true },
+	{ "vrealloc_test", vrealloc_test, },
 	/* Add a new test case here. */
 };
 

-- 
2.43.0

* Re: [PATCH v4 2/3] mm/vmalloc: free unused pages on vrealloc() shrink
  2026-03-14  9:04 ` [PATCH v4 2/3] mm/vmalloc: free unused pages on vrealloc() shrink Shivam Kalra via B4 Relay
@ 2026-03-16 17:12   ` Uladzislau Rezki
  2026-03-16 21:23     ` Shivam Kalra
  0 siblings, 1 reply; 6+ messages in thread
From: Uladzislau Rezki @ 2026-03-16 17:12 UTC (permalink / raw)
  To: shivamkalra98
  Cc: Andrew Morton, Uladzislau Rezki, linux-mm, linux-kernel,
	Alice Ryhl, Danilo Krummrich

On Sat, Mar 14, 2026 at 02:34:14PM +0530, Shivam Kalra via B4 Relay wrote:
> From: Shivam Kalra <shivamkalra98@zohomail.in>
> 
> When vrealloc() shrinks an allocation and the new size crosses a page
> boundary, unmap and free the tail pages that are no longer needed. This
> reclaims physical memory that was previously wasted for the lifetime
> of the allocation.
> 
> The heuristic is simple: always free when at least one full page becomes
> unused. Huge page allocations (page_order > 0) are skipped, as partial
> freeing would require splitting.
> 
> The virtual address reservation (vm->size / vmap_area) is intentionally
> kept unchanged, preserving the address for potential future grow-in-place
> support.
> 
> Fix the grow-in-place check to compare against vm->nr_pages rather than
> get_vm_area_size(), since the latter reflects the virtual reservation
> which does not shrink. Without this fix, a grow after shrink would
> access freed pages.
> 
> Signed-off-by: Shivam Kalra <shivamkalra98@zohomail.in>
> ---
>  mm/vmalloc.c | 19 ++++++++++++++-----
>  1 file changed, 14 insertions(+), 5 deletions(-)
> 
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index b29bf58c0e3f..2c455f2038f6 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -4345,14 +4345,23 @@ void *vrealloc_node_align_noprof(const void *p, size_t size, unsigned long align
>  			goto need_realloc;
>  	}
>  
> -	/*
> -	 * TODO: Shrink the vm_area, i.e. unmap and free unused pages. What
> -	 * would be a good heuristic for when to shrink the vm_area?
> -	 */
>  	if (size <= old_size) {
> +		unsigned int new_nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
> +
>  		/* Zero out "freed" memory, potentially for future realloc. */
>  		if (want_init_on_free() || want_init_on_alloc(flags))
>  			memset((void *)p + size, 0, old_size - size);
> +
> +		/* Free tail pages when shrink crosses a page boundary. */
> +		if (new_nr_pages < vm->nr_pages && !vm_area_page_order(vm)) {
> +			unsigned long addr = (unsigned long)p;
> +
> +			vunmap_range(addr + (new_nr_pages << PAGE_SHIFT),
> +				     addr + (vm->nr_pages << PAGE_SHIFT));
> +
> +			vm_area_free_pages(vm, new_nr_pages, vm->nr_pages);
> +			vm->nr_pages = new_nr_pages;
> +		}
>  		vm->requested_size = size;
>  		kasan_vrealloc(p, old_size, size);
>  		return (void *)p;
> @@ -4361,7 +4370,7 @@ void *vrealloc_node_align_noprof(const void *p, size_t size, unsigned long align
>  	/*
>  	 * We already have the bytes available in the allocation; use them.
>  	 */
> -	if (size <= alloced_size) {
> +	if (size <= (size_t)vm->nr_pages << PAGE_SHIFT) {
>  		/*
>  		 * No need to zero memory here, as unused memory will have
>  		 * already been zeroed at initial allocation time or during
> 
> -- 
> 2.43.0
> 
> 
Do we perform vm_reset_perms(vm) for the tail pages? As I see it, you
update vm->nr_pages when shrinking. Then in vfree() we have:

<snip>
/*
 * Flush the vm mapping and reset the direct map.
 */
static void vm_reset_perms(struct vm_struct *area)
{
	unsigned long start = ULONG_MAX, end = 0;
	unsigned int page_order = vm_area_page_order(area);
	int flush_dmap = 0;
	int i;

	/*
	 * Find the start and end range of the direct mappings to make sure that
	 * the vm_unmap_aliases() flush includes the direct map.
	 */
	for (i = 0; i < area->nr_pages; i += 1U << page_order) {
...
<snip>

i.e. tail pages go back to the page allocator without their permissions
being reset.

--
Uladzislau Rezki

* Re: [PATCH v4 2/3] mm/vmalloc: free unused pages on vrealloc() shrink
  2026-03-16 17:12   ` Uladzislau Rezki
@ 2026-03-16 21:23     ` Shivam Kalra
  0 siblings, 0 replies; 6+ messages in thread
From: Shivam Kalra @ 2026-03-16 21:23 UTC (permalink / raw)
  To: Uladzislau Rezki
  Cc: Andrew Morton, linux-mm, linux-kernel, Alice Ryhl,
	Danilo Krummrich

On 16/03/26 22:42, Uladzislau Rezki wrote:
> On Sat, Mar 14, 2026 at 02:34:14PM +0530, Shivam Kalra via B4 Relay wrote:
>> From: Shivam Kalra <shivamkalra98@zohomail.in>
>>
>> When vrealloc() shrinks an allocation and the new size crosses a page
>> boundary, unmap and free the tail pages that are no longer needed. This
>> reclaims physical memory that was previously wasted for the lifetime
>> of the allocation.
>>
>> The heuristic is simple: always free when at least one full page becomes
>> unused. Huge page allocations (page_order > 0) are skipped, as partial
>> freeing would require splitting.
>>
>> The virtual address reservation (vm->size / vmap_area) is intentionally
>> kept unchanged, preserving the address for potential future grow-in-place
>> support.
>>
>> Fix the grow-in-place check to compare against vm->nr_pages rather than
>> get_vm_area_size(), since the latter reflects the virtual reservation
>> which does not shrink. Without this fix, a grow after shrink would
>> access freed pages.
>>
>> Signed-off-by: Shivam Kalra <shivamkalra98@zohomail.in>
>> ---
>>  mm/vmalloc.c | 19 ++++++++++++++-----
>>  1 file changed, 14 insertions(+), 5 deletions(-)
>>
>> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
>> index b29bf58c0e3f..2c455f2038f6 100644
>> --- a/mm/vmalloc.c
>> +++ b/mm/vmalloc.c
>> @@ -4345,14 +4345,23 @@ void *vrealloc_node_align_noprof(const void *p, size_t size, unsigned long align
>>  			goto need_realloc;
>>  	}
>>  
>> -	/*
>> -	 * TODO: Shrink the vm_area, i.e. unmap and free unused pages. What
>> -	 * would be a good heuristic for when to shrink the vm_area?
>> -	 */
>>  	if (size <= old_size) {
>> +		unsigned int new_nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
>> +
>>  		/* Zero out "freed" memory, potentially for future realloc. */
>>  		if (want_init_on_free() || want_init_on_alloc(flags))
>>  			memset((void *)p + size, 0, old_size - size);
>> +
>> +		/* Free tail pages when shrink crosses a page boundary. */
>> +		if (new_nr_pages < vm->nr_pages && !vm_area_page_order(vm)) {
>> +			unsigned long addr = (unsigned long)p;
>> +
>> +			vunmap_range(addr + (new_nr_pages << PAGE_SHIFT),
>> +				     addr + (vm->nr_pages << PAGE_SHIFT));
>> +
>> +			vm_area_free_pages(vm, new_nr_pages, vm->nr_pages);
>> +			vm->nr_pages = new_nr_pages;
>> +		}
>>  		vm->requested_size = size;
>>  		kasan_vrealloc(p, old_size, size);
>>  		return (void *)p;
>> @@ -4361,7 +4370,7 @@ void *vrealloc_node_align_noprof(const void *p, size_t size, unsigned long align
>>  	/*
>>  	 * We already have the bytes available in the allocation; use them.
>>  	 */
>> -	if (size <= alloced_size) {
>> +	if (size <= (size_t)vm->nr_pages << PAGE_SHIFT) {
>>  		/*
>>  		 * No need to zero memory here, as unused memory will have
>>  		 * already been zeroed at initial allocation time or during
>>
>> -- 
>> 2.43.0
>>
>>
> Do we perform vm_reset_perms(vm) for the tail pages? As I see it, you
> update vm->nr_pages when shrinking. Then in vfree() we have:
> 
> <snip>
> /*
>  * Flush the vm mapping and reset the direct map.
>  */
> static void vm_reset_perms(struct vm_struct *area)
> {
> 	unsigned long start = ULONG_MAX, end = 0;
> 	unsigned int page_order = vm_area_page_order(area);
> 	int flush_dmap = 0;
> 	int i;
> 
> 	/*
> 	 * Find the start and end range of the direct mappings to make sure that
> 	 * the vm_unmap_aliases() flush includes the direct map.
> 	 */
> 	for (i = 0; i < area->nr_pages; i += 1U << page_order) {
> ...
> <snip>
> 
> i.e. tail pages go back to the page allocator without their permissions
> being reset.
> 
> --
> Uladzislau Rezki
Hi Uladzislau,

Good catch, thank you for spotting this. You are absolutely right: we
are currently returning the tail pages to the page allocator without
resetting their direct-map permissions when VM_FLUSH_RESET_PERMS is set.

While my specific use case doesn't utilize VM_FLUSH_RESET_PERMS,
vrealloc needs to safely handle all vmalloc flags as a generic API.

I will fix this in the next version (v5). I plan to add a helper
function to perform the permission reset specifically for the range of
tail pages being freed during the shrink.

Thanks,
Shivam

end of thread, other threads:[~2026-03-16 21:23 UTC | newest]

Thread overview: 6+ messages
2026-03-14  9:04 [PATCH v4 0/3] mm/vmalloc: free unused pages on vrealloc() shrink Shivam Kalra via B4 Relay
2026-03-14  9:04 ` [PATCH v4 1/3] mm/vmalloc: extract vm_area_free_pages() helper from vfree() Shivam Kalra via B4 Relay
2026-03-14  9:04 ` [PATCH v4 2/3] mm/vmalloc: free unused pages on vrealloc() shrink Shivam Kalra via B4 Relay
2026-03-16 17:12   ` Uladzislau Rezki
2026-03-16 21:23     ` Shivam Kalra
2026-03-14  9:04 ` [PATCH v4 3/3] lib/test_vmalloc: add vrealloc test case Shivam Kalra via B4 Relay
