From: Uladzislau Rezki
Date: Thu, 7 May 2026 19:17:54 +0200
To: Jill Ravaliya
Cc: akpm@linux-foundation.org, urezki@gmail.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Shivam Kalra
Subject: Re: [PATCH 1/2] mm/vmalloc: free unused pages when shrinking vrealloc() allocation
In-Reply-To: <20260507114854.41117-1-jillravaliya@gmail.com>

On Thu, May 07, 2026 at 05:18:53PM +0530, Jill Ravaliya wrote:
> vrealloc() shrink path zeros unused memory and updates
> vm->requested_size, but never frees the physical pages,
> removes page table mappings, or flushes the TLB for the
> unused range.
>
> When a caller shrinks a vmalloc allocation, physical pages
> backing the unused portion remain allocated until vfree()
> is eventually called, wasting real RAM.
>
> Fix this by unmapping the unused virtual range using
> vunmap_range(), which also flushes the TLB, freeing each
> unused physical page back to the buddy allocator, and
> updating vm->nr_pages to reflect the new page count.
>
> Signed-off-by: Jill Ravaliya
> ---
>  mm/vmalloc.c | 21 +++++++++++++++++++++
>  1 file changed, 21 insertions(+)
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index aa08651ec..a8cedfc5d 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -4336,6 +4336,27 @@ void *vrealloc_node_align_noprof(const void *p, size_t size, unsigned long align
>  		memset((void *)p + size, 0, old_size - size);
>  		vm->requested_size = size;
>  		kasan_vrealloc(p, old_size, size);
> +
> +		/* Shrink the vm_area: unmap and free unused pages. */
> +		if (size < alloced_size) {
> +			unsigned long new_nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
> +			unsigned long i;
> +
> +			/* Unmap unused virtual range and flush TLB. */
> +			vunmap_range((unsigned long)p + PAGE_ALIGN(size),
> +				     (unsigned long)p + alloced_size);
> +
> +			/* Free unused physical pages back to buddy allocator. */
> +			for (i = new_nr_pages; i < vm->nr_pages; i++) {
> +				mod_lruvec_page_state(vm->pages[i],
> +						      NR_VMALLOC, -1);
> +				__free_page(vm->pages[i]);
> +				vm->pages[i] = NULL;
> +			}
> +
> +			vm->nr_pages = new_nr_pages;
> +		}
> +
>  		return (void *)p;
>  	}
>
> --
> 2.43.0
>

There is already work to address this:

https://lore.kernel.org/all/20260428-vmalloc-shrink-v12-0-3c18c9172eb1@zohomail.in/

--
Uladzislau Rezki