* Re: [PATCH 1/2] mm/vmalloc: free unused pages when shrinking vrealloc() allocation
@ 2026-05-09 4:11 Jill Ravaliya
0 siblings, 0 replies; 3+ messages in thread
From: Jill Ravaliya @ 2026-05-09 4:11 UTC (permalink / raw)
To: akpm; +Cc: Jill Ravaliya, urezki, linux-mm, linux-kernel
Thank you for the pointer to the AI review and for taking
the time to respond.
The review identified several real issues I missed:
- vunmap_range() called with equal start/end when
PAGE_ALIGN(size) == alloced_size (confirmed by syzbot)
- No handling for huge page allocations
- Missing vm_reset_perms() for VM_FLUSH_RESET_PERMS areas
- Lockless modification of vm->nr_pages races with
/proc/vmallocinfo readers
- vm->size left unmodified after shrink
Uladzislau pointed me to Shivam Kalra's v12 series which
correctly addresses all of these cases. I am withdrawing
my patches in favor of his work and studying his series
to understand the full complexity of a correct fix.
Jill Ravaliya
^ permalink raw reply [flat|nested] 3+ messages in thread
* [PATCH 1/2] mm/vmalloc: free unused pages when shrinking vrealloc() allocation
@ 2026-05-07 11:48 Jill Ravaliya
2026-05-07 17:17 ` Uladzislau Rezki
0 siblings, 1 reply; 3+ messages in thread
From: Jill Ravaliya @ 2026-05-07 11:48 UTC (permalink / raw)
To: akpm, urezki; +Cc: linux-mm, linux-kernel, Jill Ravaliya
The vrealloc() shrink path zeros the now-unused memory and
updates vm->requested_size, but it never frees the physical
pages, removes the page table mappings, or flushes the TLB
for the unused range.
When a caller shrinks a vmalloc allocation, physical pages
backing the unused portion remain allocated until vfree()
is eventually called, wasting real RAM.
Fix this by unmapping the unused virtual range with
vunmap_range(), which also flushes the TLB, freeing each
unused physical page back to the buddy allocator, and
updating vm->nr_pages to reflect the new page count.
Signed-off-by: Jill Ravaliya <jillravaliya@gmail.com>
---
mm/vmalloc.c | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index aa08651ec..a8cedfc5d 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -4336,6 +4336,27 @@ void *vrealloc_node_align_noprof(const void *p, size_t size, unsigned long align
memset((void *)p + size, 0, old_size - size);
vm->requested_size = size;
kasan_vrealloc(p, old_size, size);
+
+ /* Shrink the vm_area: unmap and free unused pages. */
+ if (size < alloced_size) {
+ unsigned long new_nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
+ unsigned long i;
+
+ /* Unmap unused virtual range and flush TLB. */
+ vunmap_range((unsigned long)p + PAGE_ALIGN(size),
+ (unsigned long)p + alloced_size);
+
+ /* Free unused physical pages back to buddy allocator. */
+ for (i = new_nr_pages; i < vm->nr_pages; i++) {
+ mod_lruvec_page_state(vm->pages[i],
+ NR_VMALLOC, -1);
+ __free_page(vm->pages[i]);
+ vm->pages[i] = NULL;
+ }
+
+ vm->nr_pages = new_nr_pages;
+ }
+
return (void *)p;
}
--
2.43.0
^ permalink raw reply related [flat|nested] 3+ messages in thread
* Re: [PATCH 1/2] mm/vmalloc: free unused pages when shrinking vrealloc() allocation
2026-05-07 11:48 Jill Ravaliya
@ 2026-05-07 17:17 ` Uladzislau Rezki
0 siblings, 0 replies; 3+ messages in thread
From: Uladzislau Rezki @ 2026-05-07 17:17 UTC (permalink / raw)
To: Jill Ravaliya; +Cc: akpm, urezki, linux-mm, linux-kernel, Shivam Kalra
On Thu, May 07, 2026 at 05:18:53PM +0530, Jill Ravaliya wrote:
> The vrealloc() shrink path zeros the now-unused memory and
> updates vm->requested_size, but it never frees the physical
> pages, removes the page table mappings, or flushes the TLB
> for the unused range.
>
> When a caller shrinks a vmalloc allocation, physical pages
> backing the unused portion remain allocated until vfree()
> is eventually called, wasting real RAM.
>
> Fix this by unmapping the unused virtual range with
> vunmap_range(), which also flushes the TLB, freeing each
> unused physical page back to the buddy allocator, and
> updating vm->nr_pages to reflect the new page count.
>
> Signed-off-by: Jill Ravaliya <jillravaliya@gmail.com>
> ---
> mm/vmalloc.c | 21 +++++++++++++++++++++
> 1 file changed, 21 insertions(+)
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index aa08651ec..a8cedfc5d 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -4336,6 +4336,27 @@ void *vrealloc_node_align_noprof(const void *p, size_t size, unsigned long align
> memset((void *)p + size, 0, old_size - size);
> vm->requested_size = size;
> kasan_vrealloc(p, old_size, size);
> +
> + /* Shrink the vm_area: unmap and free unused pages. */
> + if (size < alloced_size) {
> + unsigned long new_nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
> + unsigned long i;
> +
> + /* Unmap unused virtual range and flush TLB. */
> + vunmap_range((unsigned long)p + PAGE_ALIGN(size),
> + (unsigned long)p + alloced_size);
> +
> + /* Free unused physical pages back to buddy allocator. */
> + for (i = new_nr_pages; i < vm->nr_pages; i++) {
> + mod_lruvec_page_state(vm->pages[i],
> + NR_VMALLOC, -1);
> + __free_page(vm->pages[i]);
> + vm->pages[i] = NULL;
> + }
> +
> + vm->nr_pages = new_nr_pages;
> + }
> +
> return (void *)p;
> }
>
> --
> 2.43.0
>
There is already work to address this: https://lore.kernel.org/all/20260428-vmalloc-shrink-v12-0-3c18c9172eb1@zohomail.in/
--
Uladzislau Rezki
^ permalink raw reply [flat|nested] 3+ messages in thread
end of thread, other threads:[~2026-05-09 4:12 UTC | newest]
Thread overview: 3+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-05-09 4:11 [PATCH 1/2] mm/vmalloc: free unused pages when shrinking vrealloc() allocation Jill Ravaliya
-- strict thread matches above, loose matches on Subject: below --
2026-05-07 11:48 Jill Ravaliya
2026-05-07 17:17 ` Uladzislau Rezki