From: "David Hildenbrand (Arm)" <david@kernel.org>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, "David Hildenbrand (Arm)", Andrew Morton,
	Lorenzo Stoakes, "Liam R. Howlett", Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Jann Horn, Pedro Falcato,
	David Rientjes, Shakeel Butt, "Matthew Wilcox (Oracle)", Alice Ryhl,
	Madhavan Srinivasan, Michael Ellerman, Christian Borntraeger,
	Janosch Frank, Claudio Imbrenda, Alexander Gordeev, Gerald Schaefer,
	Heiko Carstens, Vasily Gorbik, Jarkko Sakkinen, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Greg Kroah-Hartman, Arve Hjønnevåg,
	Todd Kjos, Christian Brauner, Carlos Llamas, Ian Abbott,
	H Hartley Sweeten, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
	Tvrtko Ursulin, David Airlie, Simona Vetter, Jason Gunthorpe,
	Leon Romanovsky, Dimitri Sivanich, Arnd Bergmann,
	Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Peter Zijlstra, Arnaldo Carvalho de Melo, Namhyung Kim,
	Andy Lutomirski, Vincenzo Frascino, Eric Dumazet, Neal Cardwell,
Miller" , David Ahern , Jakub Kicinski , Paolo Abeni , Miguel Ojeda , linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, linux-s390@vger.kernel.org, linux-sgx@vger.kernel.org, intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, linux-rdma@vger.kernel.org, bpf@vger.kernel.org, linux-perf-users@vger.kernel.org, linux-fsdevel@vger.kernel.org, netdev@vger.kernel.org, rust-for-linux@vger.kernel.org, x86@kernel.org Subject: [PATCH v1 08/16] mm/memory: move adjusting of address range to unmap_vmas() Date: Fri, 27 Feb 2026 21:08:39 +0100 Message-ID: <20260227200848.114019-9-david@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20260227200848.114019-1-david@kernel.org> References: <20260227200848.114019-1-david@kernel.org> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: 8bit __zap_vma_range() has two callers, whereby zap_page_range_single_batched() documents that the range must fit into the VMA range. So move adjusting the range to unmap_vmas() where it is actually required and add a safety check in __zap_vma_range() instead. In unmap_vmas(), we'd never expect to have empty ranges (otherwise, why have the vma in there in the first place). __zap_vma_range() will no longer be called with start == end, so cleanup the function a bit. While at it, simplify the overly long comment to its core message. We will no longer call uprobe_munmap() for start == end, which actually seems to be the right thing to do. Note that hugetlb_zap_begin()->...->adjust_range_if_pmd_sharing_possible() cannot result in the range exceeding the vma range. Signed-off-by: David Hildenbrand (Arm) --- mm/memory.c | 58 +++++++++++++++++++++-------------------------------- 1 file changed, 23 insertions(+), 35 deletions(-) diff --git a/mm/memory.c b/mm/memory.c index f0aaec57a66b..fdcd2abf29c2 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -2073,44 +2073,28 @@ static void unmap_page_range(struct mmu_gather *tlb, struct vm_area_struct *vma, tlb_end_vma(tlb, vma); } - -static void __zap_vma_range(struct mmu_gather *tlb, - struct vm_area_struct *vma, unsigned long start_addr, - unsigned long end_addr, struct zap_details *details) +static void __zap_vma_range(struct mmu_gather *tlb, struct vm_area_struct *vma, + unsigned long start, unsigned long end, + struct zap_details *details) { - unsigned long start = max(vma->vm_start, start_addr); - unsigned long end; - - if (start >= vma->vm_end) - return; - end = min(vma->vm_end, end_addr); - if (end <= vma->vm_start) - return; + VM_WARN_ON_ONCE(start >= end || !range_in_vma(vma, start, end)); if (vma->vm_file) uprobe_munmap(vma, start, end); - if (start != end) { - if (unlikely(is_vm_hugetlb_page(vma))) { - /* - * It is undesirable to test vma->vm_file as it - * should be non-null for valid hugetlb area. - * However, vm_file will be NULL in the error - * cleanup path of mmap_region. When - * hugetlbfs ->mmap method fails, - * mmap_region() nullifies vma->vm_file - * before calling this function to clean up. - * Since no pte has actually been setup, it is - * safe to do nothing in this case. - */ - if (vma->vm_file) { - zap_flags_t zap_flags = details ? - details->zap_flags : 0; - __unmap_hugepage_range(tlb, vma, start, end, - NULL, zap_flags); - } - } else - unmap_page_range(tlb, vma, start, end, details); + if (unlikely(is_vm_hugetlb_page(vma))) { + zap_flags_t zap_flags = details ? 
 mm/memory.c | 58 +++++++++++++++++++++-----------------------------------
 1 file changed, 23 insertions(+), 35 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index f0aaec57a66b..fdcd2abf29c2 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2073,44 +2073,28 @@ static void unmap_page_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	tlb_end_vma(tlb, vma);
 }
 
-
-static void __zap_vma_range(struct mmu_gather *tlb,
-		struct vm_area_struct *vma, unsigned long start_addr,
-		unsigned long end_addr, struct zap_details *details)
+static void __zap_vma_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
+		unsigned long start, unsigned long end,
+		struct zap_details *details)
 {
-	unsigned long start = max(vma->vm_start, start_addr);
-	unsigned long end;
-
-	if (start >= vma->vm_end)
-		return;
-	end = min(vma->vm_end, end_addr);
-	if (end <= vma->vm_start)
-		return;
+	VM_WARN_ON_ONCE(start >= end || !range_in_vma(vma, start, end));
 
 	if (vma->vm_file)
 		uprobe_munmap(vma, start, end);
 
-	if (start != end) {
-		if (unlikely(is_vm_hugetlb_page(vma))) {
-			/*
-			 * It is undesirable to test vma->vm_file as it
-			 * should be non-null for valid hugetlb area.
-			 * However, vm_file will be NULL in the error
-			 * cleanup path of mmap_region. When
-			 * hugetlbfs ->mmap method fails,
-			 * mmap_region() nullifies vma->vm_file
-			 * before calling this function to clean up.
-			 * Since no pte has actually been setup, it is
-			 * safe to do nothing in this case.
-			 */
-			if (vma->vm_file) {
-				zap_flags_t zap_flags = details ?
-					details->zap_flags : 0;
-				__unmap_hugepage_range(tlb, vma, start, end,
-						NULL, zap_flags);
-			}
-		} else
-			unmap_page_range(tlb, vma, start, end, details);
+	if (unlikely(is_vm_hugetlb_page(vma))) {
+		zap_flags_t zap_flags = details ? details->zap_flags : 0;
+
+		/*
+		 * vm_file will be NULL when we fail early while instantiating
+		 * a new mapping. In this case, no pages were mapped yet and
+		 * there is nothing to do.
+		 */
+		if (!vma->vm_file)
+			return;
+		__unmap_hugepage_range(tlb, vma, start, end, NULL, zap_flags);
+	} else {
+		unmap_page_range(tlb, vma, start, end, details);
 	}
 }
 
@@ -2174,8 +2158,9 @@ void unmap_vmas(struct mmu_gather *tlb, struct unmap_desc *unmap)
 				unmap->vma_start, unmap->vma_end);
 	mmu_notifier_invalidate_range_start(&range);
 	do {
-		unsigned long start = unmap->vma_start;
-		unsigned long end = unmap->vma_end;
+		unsigned long start = max(vma->vm_start, unmap->vma_start);
+		unsigned long end = min(vma->vm_end, unmap->vma_end);
+
 		hugetlb_zap_begin(vma, &start, &end);
 		__zap_vma_range(tlb, vma, start, end, &details);
 		hugetlb_zap_end(vma, &details);
@@ -2204,6 +2189,9 @@ void zap_page_range_single_batched(struct mmu_gather *tlb,
 
 	VM_WARN_ON_ONCE(!tlb || tlb->mm != vma->vm_mm);
 
+	if (unlikely(!size))
+		return;
+
 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma->vm_mm,
 				address, end);
 	hugetlb_zap_begin(vma, &range.start, &range.end);
-- 
2.43.0