From mboxrd@z Thu Jan 1 00:00:00 1970
From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman,
 patches@lists.linux.dev,
 David Hildenbrand,
 Andrea Arcangeli,
 Hugh Dickins,
 Jason Gunthorpe,
 John Hubbard,
 "Matthew Wilcox (Oracle)",
 Peter Xu,
 Shuah Khan,
 Vlastimil Babka,
 Andrew Morton,
 Pedro Demarchi Gomes
Subject: [PATCH 5.15 173/206] mm/pagewalk: add walk_page_range_vma()
Date: Wed, 4 Feb 2026 15:40:04 +0100
Message-ID: <20260204143904.441025908@linuxfoundation.org>
In-Reply-To: <20260204143858.193781818@linuxfoundation.org>
References: <20260204143858.193781818@linuxfoundation.org>
User-Agent: quilt/0.69
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

5.15-stable review patch.  If anyone has any objections, please let me know.

------------------

From: David Hildenbrand

[ Upstream commit e07cda5f232fac4de0925d8a4c92e51e41fa2f6e ]

Let's add walk_page_range_vma(), which is similar to walk_page_vma() but
only walks a subset of the VMA range.

To be used in KSM code to stop using follow_page() next.
Link: https://lkml.kernel.org/r/20221021101141.84170-8-david@redhat.com
Signed-off-by: David Hildenbrand
Cc: Andrea Arcangeli
Cc: Hugh Dickins
Cc: Jason Gunthorpe
Cc: John Hubbard
Cc: Matthew Wilcox (Oracle)
Cc: Peter Xu
Cc: Shuah Khan
Cc: Vlastimil Babka
Signed-off-by: Andrew Morton
Stable-dep-of: f5548c318d6 ("ksm: use range-walk function to jump over holes in scan_get_next_rmap_item")
Signed-off-by: Pedro Demarchi Gomes
Signed-off-by: Greg Kroah-Hartman
---
 include/linux/pagewalk.h |  3 +++
 mm/pagewalk.c            | 20 ++++++++++++++++++++
 2 files changed, 23 insertions(+)

--- a/include/linux/pagewalk.h
+++ b/include/linux/pagewalk.h
@@ -99,6 +99,9 @@ int walk_page_range_novma(struct mm_stru
 			  unsigned long end, const struct mm_walk_ops *ops,
 			  pgd_t *pgd,
 			  void *private);
+int walk_page_range_vma(struct vm_area_struct *vma, unsigned long start,
+			unsigned long end, const struct mm_walk_ops *ops,
+			void *private);
 int walk_page_vma(struct vm_area_struct *vma, const struct mm_walk_ops *ops,
 		void *private);
 int walk_page_mapping(struct address_space *mapping, pgoff_t first_index,
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -509,6 +509,26 @@ int walk_page_range_novma(struct mm_stru
 	return walk_pgd_range(start, end, &walk);
 }
 
+int walk_page_range_vma(struct vm_area_struct *vma, unsigned long start,
+		unsigned long end, const struct mm_walk_ops *ops,
+		void *private)
+{
+	struct mm_walk walk = {
+		.ops		= ops,
+		.mm		= vma->vm_mm,
+		.vma		= vma,
+		.private	= private,
+	};
+
+	if (start >= end || !walk.mm)
+		return -EINVAL;
+	if (start < vma->vm_start || end > vma->vm_end)
+		return -EINVAL;
+
+	mmap_assert_locked(walk.mm);
+	return __walk_page_range(start, end, &walk);
+}
+
 int walk_page_vma(struct vm_area_struct *vma, const struct mm_walk_ops *ops,
 		void *private)
 {