From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 27 Mar 2026 16:13:02 -0700
To: mm-commits@vger.kernel.org, surenb@google.com, akpm@linux-foundation.org
From: Andrew Morton
Subject: + mm-use-vma_start_write_killable-in-process_vma_walk_lock.patch added to mm-new branch
Message-Id: <20260327231303.98F1DC19423@smtp.kernel.org>
Precedence: bulk
X-Mailing-List: mm-commits@vger.kernel.org

The patch titled
     Subject: mm: use vma_start_write_killable() in process_vma_walk_lock()
has been added to the -mm mm-new branch.  Its filename is
     mm-use-vma_start_write_killable-in-process_vma_walk_lock.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-use-vma_start_write_killable-in-process_vma_walk_lock.patch

This patch will later appear in the mm-new branch at
     git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Note, mm-new is a provisional staging ground for work-in-progress
patches, and acceptance into mm-new is a notification for others to take
notice and to finish up reviews.  Please do not hesitate to respond to
review feedback and post updated versions to replace or incrementally
fixup patches in mm-new.
The mm-new branch of mm.git is not included in linux-next.

If a few days of testing in mm-new is successful, the patch will be
moved into mm.git's mm-unstable branch, which is included in linux-next.

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when
    testing your code ***

The -mm tree is included in linux-next via various branches at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there most days.

------------------------------------------------------
From: Suren Baghdasaryan
Subject: mm: use vma_start_write_killable() in process_vma_walk_lock()
Date: Fri, 27 Mar 2026 13:54:56 -0700

Replace vma_start_write() with vma_start_write_killable() when
process_vma_walk_lock() is used with the PGWALK_WRLOCK option.  Adjust
its direct and indirect users to check for a possible error and handle
it, ensuring -EINTR is propagated rather than silently ignored.  When
queue_pages_range() fails, check whether it failed due to a fatal signal
or for some other reason, and return the appropriate error.

Link: https://lkml.kernel.org/r/20260327205457.604224-6-surenb@google.com
Signed-off-by: Suren Baghdasaryan
Suggested-by: Matthew Wilcox
Cc: Alexander Gordeev
Cc: Alistair Popple
Cc: Baolin Wang
Cc: Barry Song
Cc: Byungchul Park
Cc: Christian Borntraeger
Cc: Claudio Imbrenda
Cc: David Hildenbrand
Cc: Dev Jain
Cc: Gerald Schaefer
Cc: Gregory Price
Cc: Heiko Carstens
Cc: "Huang, Ying"
Cc: Jann Horn
Cc: Janosch Frank
Cc: Joshua Hahn
Cc: Kees Cook
Cc: Lance Yang
Cc: Liam R. Howlett
Cc: Lorenzo Stoakes
Cc: Lorenzo Stoakes (Oracle)
Cc: Madhavan Srinivasan
Cc: Matthew Brost
Cc: Michael Ellerman
Cc: Michal Hocko
Cc: Mike Rapoport
Cc: Nicholas Piggin
Cc: Nico Pache
Cc: Pedro Falcato
Cc: Rakie Kim
Cc: Ritesh Harjani (IBM)
Cc: Ryan Roberts
Cc: Sven Schnelle
Cc: Vasily Gorbik
Cc: Vlastimil Babka
Cc: Zi Yan
Signed-off-by: Andrew Morton
---

 fs/proc/task_mmu.c |   12 ++++++------
 mm/mempolicy.c     |   10 +++++++++-
 mm/pagewalk.c      |   22 +++++++++++++++-------
 3 files changed, 30 insertions(+), 14 deletions(-)

--- a/fs/proc/task_mmu.c~mm-use-vma_start_write_killable-in-process_vma_walk_lock
+++ a/fs/proc/task_mmu.c
@@ -1774,15 +1774,15 @@ static ssize_t clear_refs_write(struct f
 	struct vm_area_struct *vma;
 	enum clear_refs_types type;
 	int itype;
-	int rv;
+	int err;
 
 	if (count > sizeof(buffer) - 1)
 		count = sizeof(buffer) - 1;
 	if (copy_from_user(buffer, buf, count))
 		return -EFAULT;
-	rv = kstrtoint(strstrip(buffer), 10, &itype);
-	if (rv < 0)
-		return rv;
+	err = kstrtoint(strstrip(buffer), 10, &itype);
+	if (err)
+		return err;
 	type = (enum clear_refs_types)itype;
 	if (type < CLEAR_REFS_ALL || type >= CLEAR_REFS_LAST)
 		return -EINVAL;
@@ -1824,7 +1824,7 @@ static ssize_t clear_refs_write(struct f
 				0, mm, 0, -1UL);
 			mmu_notifier_invalidate_range_start(&range);
 		}
-		walk_page_range(mm, 0, -1, &clear_refs_walk_ops, &cp);
+		err = walk_page_range(mm, 0, -1, &clear_refs_walk_ops, &cp);
 		if (type == CLEAR_REFS_SOFT_DIRTY) {
 			mmu_notifier_invalidate_range_end(&range);
 			flush_tlb_mm(mm);
@@ -1837,7 +1837,7 @@ out_mm:
 	}
 	put_task_struct(task);
 
-	return count;
+	return err ? err : count;
 }
 
 const struct file_operations proc_clear_refs_operations = {
--- a/mm/mempolicy.c~mm-use-vma_start_write_killable-in-process_vma_walk_lock
+++ a/mm/mempolicy.c
@@ -969,6 +969,7 @@ static const struct mm_walk_ops queue_pa
  * (a hugetlbfs page or a transparent huge page being counted as 1).
  * -EIO - a misplaced page found, when MPOL_MF_STRICT specified without MOVEs.
  * -EFAULT - a hole in the memory range, when MPOL_MF_DISCONTIG_OK unspecified.
+ * -EINTR - walk got terminated due to pending fatal signal.
  */
 static long queue_pages_range(struct mm_struct *mm, unsigned long start,
 		unsigned long end,
@@ -1545,7 +1546,14 @@ static long do_mbind(unsigned long start
 			  flags | MPOL_MF_INVERT | MPOL_MF_WRLOCK, &pagelist);
 
 	if (nr_failed < 0) {
-		err = nr_failed;
+		/*
+		 * queue_pages_range() might override the original error with -EFAULT.
+		 * Confirm that fatal signals are still treated correctly.
+		 */
+		if (fatal_signal_pending(current))
+			err = -EINTR;
+		else
+			err = nr_failed;
 		nr_failed = 0;
 	} else {
 		vma_iter_init(&vmi, mm, start);
--- a/mm/pagewalk.c~mm-use-vma_start_write_killable-in-process_vma_walk_lock
+++ a/mm/pagewalk.c
@@ -443,14 +443,13 @@ static inline void process_mm_walk_lock(
 	mmap_assert_write_locked(mm);
 }
 
-static inline void process_vma_walk_lock(struct vm_area_struct *vma,
-					 enum page_walk_lock walk_lock)
+static int process_vma_walk_lock(struct vm_area_struct *vma,
+				 enum page_walk_lock walk_lock)
 {
 #ifdef CONFIG_PER_VMA_LOCK
 	switch (walk_lock) {
 	case PGWALK_WRLOCK:
-		vma_start_write(vma);
-		break;
+		return vma_start_write_killable(vma);
 	case PGWALK_WRLOCK_VERIFY:
 		vma_assert_write_locked(vma);
 		break;
@@ -462,6 +461,7 @@ static inline void process_vma_walk_lock
 		break;
 	}
 #endif
+	return 0;
 }
 
 /*
@@ -505,7 +505,9 @@ int walk_page_range_mm_unsafe(struct mm_
 			if (ops->pte_hole)
 				err = ops->pte_hole(start, next, -1, &walk);
 		} else { /* inside vma */
-			process_vma_walk_lock(vma, ops->walk_lock);
+			err = process_vma_walk_lock(vma, ops->walk_lock);
+			if (err)
+				break;
 			walk.vma = vma;
 			next = min(end, vma->vm_end);
 			vma = find_vma(mm, vma->vm_end);
@@ -722,6 +724,7 @@ int walk_page_range_vma_unsafe(struct vm
 		.vma		= vma,
 		.private	= private,
 	};
+	int err;
 
 	if (start >= end || !walk.mm)
 		return -EINVAL;
@@ -729,7 +732,9 @@ int walk_page_range_vma_unsafe(struct vm
 		return -EINVAL;
 
 	process_mm_walk_lock(walk.mm, ops->walk_lock);
-	process_vma_walk_lock(vma, ops->walk_lock);
+	err = process_vma_walk_lock(vma, ops->walk_lock);
+	if (err)
+		return err;
 	return __walk_page_range(start, end, &walk);
 }
 
@@ -752,6 +757,7 @@ int walk_page_vma(struct vm_area_struct
 		.vma		= vma,
 		.private	= private,
 	};
+	int err;
 
 	if (!walk.mm)
 		return -EINVAL;
@@ -759,7 +765,9 @@ int walk_page_vma(struct vm_area_struct
 		return -EINVAL;
 
 	process_mm_walk_lock(walk.mm, ops->walk_lock);
-	process_vma_walk_lock(vma, ops->walk_lock);
+	err = process_vma_walk_lock(vma, ops->walk_lock);
+	if (err)
+		return err;
 	return __walk_page_range(vma->vm_start, vma->vm_end, &walk);
 }
_

Patches currently in -mm which might be from surenb@google.com are

mm-vma-cleanup-error-handling-path-in-vma_expand.patch
mm-use-vma_start_write_killable-in-mm-syscalls.patch
mm-khugepaged-use-vma_start_write_killable-in-collapse_huge_page.patch
mm-vma-use-vma_start_write_killable-in-vma-operations.patch
mm-use-vma_start_write_killable-in-process_vma_walk_lock.patch
kvm-ppc-use-vma_start_write_killable-in-kvmppc_memslot_page_merge.patch
mm-vmscan-prevent-mglru-reclaim-from-pinning-address-space.patch
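
To illustrate the calling convention the series converges on, here is a
minimal sketch.  It is not part of the patch: walk_vmas_killable() is a
hypothetical helper, and the only behavior assumed of
vma_start_write_killable() is what the diff above shows, i.e. it returns
0 on success or -EINTR when a fatal signal is pending.

#include <linux/mm.h>
#include <linux/sched/signal.h>

/*
 * Minimal sketch, not part of the patch.  walk_vmas_killable() is a
 * hypothetical helper showing the caller-side pattern: try to
 * write-lock each VMA, and propagate -EINTR instead of ignoring it.
 */
static int walk_vmas_killable(struct mm_struct *mm)
{
	VMA_ITERATOR(vmi, mm, 0);
	struct vm_area_struct *vma;
	int err = 0;

	/* The per-VMA write lock is taken under the mmap write lock. */
	mmap_assert_write_locked(mm);

	for_each_vma(vmi, vma) {
		/* Fails instead of blocking once a fatal signal arrives. */
		err = vma_start_write_killable(vma);
		if (err)
			return err;	/* typically -EINTR; do not swallow it */

		/* ... operate on the write-locked VMA ... */
	}

	return err;
}

Callers that fold errors together, the way queue_pages_range() can fold
-EINTR into -EFAULT, additionally recheck fatal_signal_pending() before
reporting, as the do_mbind() hunk above does, so that userspace still
sees -EINTR after a fatal signal.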