From mboxrd@z Thu Jan 1 00:00:00 1970
From: Puranjay Mohan
To: "David Hildenbrand (Arm)", linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, "David Hildenbrand (Arm)", Andrew Morton,
	Lorenzo Stoakes, "Liam R. Howlett", Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Jann Horn, Pedro Falcato,
	David Rientjes, Shakeel Butt, "Matthew Wilcox (Oracle)", Alice Ryhl,
	Madhavan Srinivasan, Michael Ellerman, Christian Borntraeger,
	Janosch Frank, Claudio Imbrenda, Alexander Gordeev, Gerald Schaefer,
	Heiko Carstens, Vasily Gorbik, Jarkko Sakkinen, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Greg Kroah-Hartman, Arve Hjønnevåg,
	Todd Kjos, Christian Brauner, Carlos Llamas, Ian Abbott,
	H Hartley Sweeten, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
	Tvrtko Ursulin, David Airlie, Simona Vetter, Jason Gunthorpe,
	Leon Romanovsky, Dimitri Sivanich, Arnd Bergmann, Alexei Starovoitov,
	Daniel Borkmann, Andrii Nakryiko, Peter Zijlstra,
	Arnaldo Carvalho de Melo, Namhyung Kim, Andy Lutomirski,
	Vincenzo Frascino, Eric Dumazet, Neal Cardwell, "David S. Miller",
	David Ahern, Jakub Kicinski, Paolo Abeni, Miguel Ojeda,
	linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org,
	linux-s390@vger.kernel.org, linux-sgx@vger.kernel.org,
	intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
	linux-rdma@vger.kernel.org, bpf@vger.kernel.org,
	linux-perf-users@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	netdev@vger.kernel.org, rust-for-linux@vger.kernel.org, x86@kernel.org
Subject: Re: [PATCH v1 14/16] mm: rename zap_page_range_single() to zap_vma_range()
In-Reply-To: <20260227200848.114019-15-david@kernel.org>
References: <20260227200848.114019-1-david@kernel.org> <20260227200848.114019-15-david@kernel.org>
Date: Mon, 09 Mar 2026 16:46:09 +0000
List-Id: Intel graphics driver community testing & development

"David Hildenbrand (Arm)" writes:

> Let's rename it to make it better match our new naming scheme.
>
> While at it, polish the kerneldoc.
>
> Signed-off-by: David Hildenbrand (Arm)
> ---
>  arch/s390/mm/gmap_helpers.c          |  2 +-
>  drivers/android/binder/page_range.rs |  4 ++--
>  drivers/android/binder_alloc.c       |  2 +-
>  include/linux/mm.h                   |  4 ++--
>  kernel/bpf/arena.c                   |  2 +-
>  kernel/events/core.c                 |  2 +-
>  mm/madvise.c                         |  4 ++--
>  mm/memory.c                          | 14 +++++++-------
>  net/ipv4/tcp.c                       |  6 +++---
>  rust/kernel/mm/virt.rs               |  4 ++--
>  10 files changed, 22 insertions(+), 22 deletions(-)
>
> diff --git a/arch/s390/mm/gmap_helpers.c b/arch/s390/mm/gmap_helpers.c
> index ae2d59a19313..f8789ffcc05c 100644
> --- a/arch/s390/mm/gmap_helpers.c
> +++ b/arch/s390/mm/gmap_helpers.c
> @@ -89,7 +89,7 @@ void gmap_helper_discard(struct mm_struct *mm, unsigned long vmaddr, unsigned lo
>  		if (!vma)
>  			return;
>  		if (!is_vm_hugetlb_page(vma))
> -			zap_page_range_single(vma, vmaddr, min(end, vma->vm_end) - vmaddr);
> +			zap_vma_range(vma, vmaddr, min(end, vma->vm_end) - vmaddr);
>  		vmaddr = vma->vm_end;
>  	}
>  }
>
> diff --git a/drivers/android/binder/page_range.rs b/drivers/android/binder/page_range.rs
> index fdd97112ef5c..2fddd4ed8d4c 100644
> --- a/drivers/android/binder/page_range.rs
> +++ b/drivers/android/binder/page_range.rs
> @@ -130,7 +130,7 @@ pub(crate) struct ShrinkablePageRange {
>      pid: Pid,
>      /// The mm for the relevant process.
>      mm: ARef,
> -    /// Used to synchronize calls to `vm_insert_page` and `zap_page_range_single`.
> +    /// Used to synchronize calls to `vm_insert_page` and `zap_vma_range`.
>      #[pin]
>      mm_lock: Mutex<()>,
>      /// Spinlock protecting changes to pages.
> @@ -719,7 +719,7 @@ fn drop(self: Pin<&mut Self>) {
>
>          if let Some(vma) = mmap_read.vma_lookup(vma_addr) {
>              let user_page_addr = vma_addr + (page_index << PAGE_SHIFT);
> -            vma.zap_page_range_single(user_page_addr, PAGE_SIZE);
> +            vma.zap_vma_range(user_page_addr, PAGE_SIZE);
>          }
>
>          drop(mmap_read);
>
> diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
> index dd2046bd5cde..e4488ad86a65 100644
> --- a/drivers/android/binder_alloc.c
> +++ b/drivers/android/binder_alloc.c
> @@ -1185,7 +1185,7 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
>  	if (vma) {
>  		trace_binder_unmap_user_start(alloc, index);
>
> -		zap_page_range_single(vma, page_addr, PAGE_SIZE);
> +		zap_vma_range(vma, page_addr, PAGE_SIZE);
>
>  		trace_binder_unmap_user_end(alloc, index);
>  	}
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 4bd1500b9630..833bedd3f739 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2835,7 +2835,7 @@ struct page *vm_normal_page_pud(struct vm_area_struct *vma, unsigned long addr,
>
>  void zap_vma_ptes(struct vm_area_struct *vma, unsigned long address,
>  		unsigned long size);
> -void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
> +void zap_vma_range(struct vm_area_struct *vma, unsigned long address,
>  		unsigned long size);
>  /**
>   * zap_vma - zap all page table entries in a vma
> @@ -2843,7 +2843,7 @@ void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
>   */
>  static inline void zap_vma(struct vm_area_struct *vma)
>  {
> -	zap_page_range_single(vma, vma->vm_start, vma->vm_end - vma->vm_start);
> +	zap_vma_range(vma, vma->vm_start, vma->vm_end - vma->vm_start);
>  }
>  struct mmu_notifier_range;
>
> diff --git a/kernel/bpf/arena.c b/kernel/bpf/arena.c
> index c34510d83b1f..37843c6a4764 100644
> --- a/kernel/bpf/arena.c
> +++ b/kernel/bpf/arena.c
> @@ -656,7 +656,7 @@ static void zap_pages(struct bpf_arena *arena, long uaddr, long page_cnt)
>  	guard(mutex)(&arena->lock);
>  	/* iterate link list under lock */
>  	list_for_each_entry(vml, &arena->vma_list, head)
> -		zap_page_range_single(vml->vma, uaddr, PAGE_SIZE * page_cnt);
> +		zap_vma_range(vml->vma, uaddr, PAGE_SIZE * page_cnt);
>  }

Acked-by: Puranjay Mohan