From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman, patches@lists.linux.dev, Peter Xu, Mike Kravetz,
 James Houghton, Andrea Arcangeli, Axel Rasmussen, David Hildenbrand,
 Muchun Song, Nadav Amit, Andrew Morton, Harry Yoo,
 "David Hildenbrand (Red Hat)"
Subject: [PATCH 5.10 394/451] mm/mprotect: use long for page accountings and retval
Date: Thu, 15 Jan 2026 17:49:55 +0100
Message-ID: <20260115164245.186677511@linuxfoundation.org>
X-Mailer: git-send-email 2.52.0
In-Reply-To: <20260115164230.864985076@linuxfoundation.org>
References: <20260115164230.864985076@linuxfoundation.org>
User-Agent: quilt/0.69
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

5.10-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Peter Xu

commit a79390f5d6a78647fd70856bd42b22d994de0ba2 upstream.

Switch to type "long" for page accounting and the return value across the
whole change_protection() procedure.  This halves the maximum representable
page count compared to before (ULONG_MAX / 2), but it cannot overflow on
any system, because the maximum number of pages change_protection() can
possibly touch is ULONG_MAX / PAGE_SIZE.

There are two reasons to switch from "unsigned long" to "long":

1. It suits count_vm_numa_events() better, whose second parameter takes
   a long.

2. It paves the way for returning negative (error) values in the future
   (a standalone sketch of this pattern follows this list).
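Concretely, here is a minimal userspace sketch of that signed-return
pattern: one return slot carries either a page count or a negative errno.
This is not kernel code; update_range() and its constants are hypothetical
stand-ins for change_protection().

/* Sketch only: a signed long reports pages changed on success, or -errno
 * on failure.  The largest possible count, ULONG_MAX / PAGE_SIZE, fits in
 * a long on both 32-bit and 64-bit systems. */
#include <errno.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL

static long update_range(unsigned long start, unsigned long end)
{
	if (start >= end)
		return -EINVAL;			/* negative: needs a signed type */
	return (end - start) / PAGE_SIZE;	/* page count on success */
}

int main(void)
{
	long ret = update_range(0x1000, 0x5000);

	if (ret < 0)
		printf("error: %ld\n", ret);
	else
		printf("changed %ld pages\n", ret);	/* prints 4 */
	return 0;
}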
Currently the only caller that consumes this return value is
change_prot_numa(), where the unsigned long was converted to an int.
While at it, touch up the NUMA code to also take a long, so it avoids any
possible overflow during the int-size conversion as well (a sketch of that
truncation hazard follows the diff).

Link: https://lkml.kernel.org/r/20230104225207.1066932-3-peterx@redhat.com
Signed-off-by: Peter Xu
Acked-by: Mike Kravetz
Acked-by: James Houghton
Cc: Andrea Arcangeli
Cc: Axel Rasmussen
Cc: David Hildenbrand
Cc: Muchun Song
Cc: Nadav Amit
Signed-off-by: Andrew Morton
[ Adjust context ]
Signed-off-by: Harry Yoo
Acked-by: David Hildenbrand (Red Hat)
Signed-off-by: Greg Kroah-Hartman
---
 include/linux/hugetlb.h |    4 ++--
 include/linux/mm.h      |    2 +-
 mm/hugetlb.c            |    4 ++--
 mm/mempolicy.c          |    2 +-
 mm/mprotect.c           |   34 +++++++++++++++++-----------------
 5 files changed, 23 insertions(+), 23 deletions(-)

--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -184,7 +184,7 @@ struct page *follow_huge_pgd(struct mm_s
 int pmd_huge(pmd_t pmd);
 int pud_huge(pud_t pud);
 
-unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
+long hugetlb_change_protection(struct vm_area_struct *vma,
 		unsigned long address, unsigned long end, pgprot_t newprot);
 
 bool is_hugetlb_entry_migration(pte_t pte);
@@ -342,7 +342,7 @@ static inline void move_hugetlb_state(st
 {
 }
 
-static inline unsigned long hugetlb_change_protection(
+static inline long hugetlb_change_protection(
 			struct vm_area_struct *vma, unsigned long address,
 			unsigned long end, pgprot_t newprot)
 {
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1876,7 +1876,7 @@ extern unsigned long move_page_tables(st
 #define  MM_CP_UFFD_WP_ALL                 (MM_CP_UFFD_WP | \
 					    MM_CP_UFFD_WP_RESOLVE)
 
-extern unsigned long change_protection(struct vm_area_struct *vma, unsigned long start,
+extern long change_protection(struct vm_area_struct *vma, unsigned long start,
 			      unsigned long end, pgprot_t newprot,
 			      unsigned long cp_flags);
 extern int mprotect_fixup(struct vm_area_struct *vma,
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5051,7 +5051,7 @@ same_page:
 #define flush_hugetlb_tlb_range(vma, addr, end)	flush_tlb_range(vma, addr, end)
 #endif
 
-unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
+long hugetlb_change_protection(struct vm_area_struct *vma,
 		unsigned long address, unsigned long end, pgprot_t newprot)
 {
 	struct mm_struct *mm = vma->vm_mm;
@@ -5059,7 +5059,7 @@ unsigned long hugetlb_change_protection(
 	pte_t *ptep;
 	pte_t pte;
 	struct hstate *h = hstate_vma(vma);
-	unsigned long pages = 0;
+	long pages = 0;
 	bool shared_pmd = false;
 	struct mmu_notifier_range range;
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -653,7 +653,7 @@ unlock:
 unsigned long change_prot_numa(struct vm_area_struct *vma,
 			unsigned long addr, unsigned long end)
 {
-	int nr_updated;
+	long nr_updated;
 
 	nr_updated = change_protection(vma, addr, end, PAGE_NONE, MM_CP_PROT_NUMA);
 	if (nr_updated)
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -35,13 +35,13 @@
 
 #include "internal.h"
 
-static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
+static long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 		unsigned long addr, unsigned long end, pgprot_t newprot,
 		unsigned long cp_flags)
 {
 	pte_t *pte, oldpte;
 	spinlock_t *ptl;
-	unsigned long pages = 0;
+	long pages = 0;
 	int target_node = NUMA_NO_NODE;
 	bool dirty_accountable = cp_flags & MM_CP_DIRTY_ACCT;
 	bool prot_numa = cp_flags & MM_CP_PROT_NUMA;
@@ -209,13 +209,13 @@ static inline int pmd_none_or_clear_bad_
 	return 0;
 }
 
-static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
+static inline long change_pmd_range(struct vm_area_struct *vma,
 		pud_t *pud, unsigned long addr, unsigned long end,
 		pgprot_t newprot, unsigned long cp_flags)
 {
 	pmd_t *pmd;
 	unsigned long next;
-	unsigned long pages = 0;
+	long pages = 0;
 	unsigned long nr_huge_updates = 0;
 	struct mmu_notifier_range range;
 
@@ -223,7 +223,7 @@ static inline unsigned long change_pmd_r
 
 	pmd = pmd_offset(pud, addr);
 	do {
-		unsigned long this_pages;
+		long this_pages;
 
 		next = pmd_addr_end(addr, end);
@@ -281,13 +281,13 @@ next:
 	return pages;
 }
 
-static inline unsigned long change_pud_range(struct vm_area_struct *vma,
-		p4d_t *p4d, unsigned long addr, unsigned long end,
-		pgprot_t newprot, unsigned long cp_flags)
+static inline long change_pud_range(struct vm_area_struct *vma, p4d_t *p4d,
+		unsigned long addr, unsigned long end, pgprot_t newprot,
+		unsigned long cp_flags)
 {
 	pud_t *pud;
 	unsigned long next;
-	unsigned long pages = 0;
+	long pages = 0;
 
 	pud = pud_offset(p4d, addr);
 	do {
@@ -301,13 +301,13 @@ static inline unsigned long change_pud_r
 	return pages;
 }
 
-static inline unsigned long change_p4d_range(struct vm_area_struct *vma,
-		pgd_t *pgd, unsigned long addr, unsigned long end,
-		pgprot_t newprot, unsigned long cp_flags)
+static inline long change_p4d_range(struct vm_area_struct *vma, pgd_t *pgd,
+		unsigned long addr, unsigned long end, pgprot_t newprot,
+		unsigned long cp_flags)
 {
 	p4d_t *p4d;
 	unsigned long next;
-	unsigned long pages = 0;
+	long pages = 0;
 
 	p4d = p4d_offset(pgd, addr);
 	do {
@@ -321,7 +321,7 @@ static inline unsigned long change_p4d_r
 	return pages;
 }
 
-static unsigned long change_protection_range(struct vm_area_struct *vma,
+static long change_protection_range(struct vm_area_struct *vma,
 		unsigned long addr, unsigned long end, pgprot_t newprot,
 		unsigned long cp_flags)
 {
@@ -329,7 +329,7 @@ static unsigned long change_protection_r
 	pgd_t *pgd;
 	unsigned long next;
 	unsigned long start = addr;
-	unsigned long pages = 0;
+	long pages = 0;
 
 	BUG_ON(addr >= end);
 	pgd = pgd_offset(mm, addr);
@@ -351,11 +351,11 @@ static unsigned long change_protection_r
 	return pages;
 }
 
-unsigned long change_protection(struct vm_area_struct *vma, unsigned long start,
+long change_protection(struct vm_area_struct *vma, unsigned long start,
 		       unsigned long end, pgprot_t newprot,
 		       unsigned long cp_flags)
 {
-	unsigned long pages;
+	long pages;
 
 	BUG_ON((cp_flags & MM_CP_UFFD_WP_ALL) == MM_CP_UFFD_WP_ALL);
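Finally, a standalone sketch of the int-size truncation hazard that the
mm/mempolicy.c hunk above removes.  Again this is plain userspace C with
hypothetical variable names, not kernel code.

#include <limits.h>
#include <stdio.h>

int main(void)
{
	/* A page count just above INT_MAX, conceivable for a very large
	 * mapping on a 64-bit system. */
	unsigned long nr_pages = (unsigned long)INT_MAX + 10;

	int as_int = (int)nr_pages;	/* old code: out of range; the result
					 * is implementation-defined and
					 * typically wraps negative */
	long as_long = (long)nr_pages;	/* new code: value preserved on LP64,
					 * where long is 64-bit */

	printf("count:   %lu\n", nr_pages);
	printf("as int:  %d\n", as_int);
	printf("as long: %ld\n", as_long);
	return 0;
}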