From: Uladzislau Rezki
Date: Mon, 16 Mar 2026 18:12:36 +0100
To: shivamkalra98@zohomail.in
Cc: Andrew Morton, Uladzislau Rezki, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Alice Ryhl, Danilo Krummrich
Subject: Re: [PATCH v4 2/3] mm/vmalloc: free unused pages on vrealloc() shrink
References: <20260314-vmalloc-shrink-v4-0-c1e2e0bb5455@zohomail.in>
 <20260314-vmalloc-shrink-v4-2-c1e2e0bb5455@zohomail.in>
In-Reply-To: <20260314-vmalloc-shrink-v4-2-c1e2e0bb5455@zohomail.in>

On Sat, Mar 14, 2026 at 02:34:14PM +0530, Shivam Kalra via B4 Relay wrote:
> From: Shivam Kalra
>
> When vrealloc() shrinks an allocation and the new size crosses a page
> boundary, unmap and free the tail pages that are no longer needed. This
> reclaims physical memory that was previously wasted for the lifetime
> of the allocation.
>
> The heuristic is simple: always free when at least one full page becomes
> unused. Huge page allocations (page_order > 0) are skipped, as partial
> freeing would require splitting.
>
> The virtual address reservation (vm->size / vmap_area) is intentionally
> kept unchanged, preserving the address for potential future grow-in-place
> support.
>
> Fix the grow-in-place check to compare against vm->nr_pages rather than
> get_vm_area_size(), since the latter reflects the virtual reservation
> which does not shrink. Without this fix, a grow after shrink would
> access freed pages.
>
> Signed-off-by: Shivam Kalra
> ---
>  mm/vmalloc.c | 19 ++++++++++++++-----
>  1 file changed, 14 insertions(+), 5 deletions(-)
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index b29bf58c0e3f..2c455f2038f6 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -4345,14 +4345,23 @@ void *vrealloc_node_align_noprof(const void *p, size_t size, unsigned long align
> 		goto need_realloc;
> 	}
>
> -	/*
> -	 * TODO: Shrink the vm_area, i.e. unmap and free unused pages. What
> -	 * would be a good heuristic for when to shrink the vm_area?
> -	 */
> 	if (size <= old_size) {
> +		unsigned int new_nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
> +
> 		/* Zero out "freed" memory, potentially for future realloc. */
> 		if (want_init_on_free() || want_init_on_alloc(flags))
> 			memset((void *)p + size, 0, old_size - size);
> +
> +		/* Free tail pages when shrink crosses a page boundary. */
> +		if (new_nr_pages < vm->nr_pages && !vm_area_page_order(vm)) {
> +			unsigned long addr = (unsigned long)p;
> +
> +			vunmap_range(addr + (new_nr_pages << PAGE_SHIFT),
> +				     addr + (vm->nr_pages << PAGE_SHIFT));
> +
> +			vm_area_free_pages(vm, new_nr_pages, vm->nr_pages);
> +			vm->nr_pages = new_nr_pages;
> +		}
> 		vm->requested_size = size;
> 		kasan_vrealloc(p, old_size, size);
> 		return (void *)p;
> @@ -4361,7 +4370,7 @@ void *vrealloc_node_align_noprof(const void *p, size_t size, unsigned long align
> 	/*
> 	 * We already have the bytes available in the allocation; use them.
> 	 */
> -	if (size <= alloced_size) {
> +	if (size <= (size_t)vm->nr_pages << PAGE_SHIFT) {
> 		/*
> 		 * No need to zero memory here, as unused memory will have
> 		 * already been zeroed at initial allocation time or during
>
> --
> 2.43.0
>

Do we perform vm_reset_perms(vm) for the tail pages? As I see, you update
vm->nr_pages when shrinking. Then on vfree() we have:

/*
 * Flush the vm mapping and reset the direct map.
 */
static void vm_reset_perms(struct vm_struct *area)
{
	unsigned long start = ULONG_MAX, end = 0;
	unsigned int page_order = vm_area_page_order(area);
	int flush_dmap = 0;
	int i;

	/*
	 * Find the start and end range of the direct mappings to make sure that
	 * the vm_unmap_aliases() flush includes the direct map.
	 */
	for (i = 0; i < area->nr_pages; i += 1U << page_order) {
	...

i.e. the tail pages go back to the page allocator without their
permissions being reset.

--
Uladzislau Rezki