Date: Wed, 01 Apr 2026 21:36:43 -0700
To: mm-commits@vger.kernel.org,rppt@kernel.org,akpm@linux-foundation.org
From: Andrew Morton <akpm@linux-foundation.org>
Subject: + userfaultfd-introduce-mfill_get_vma-and-mfill_put_vma.patch added to mm-unstable branch
Message-Id: <20260402043643.BF707C19423@smtp.kernel.org>
Precedence: bulk
X-Mailing-List: mm-commits@vger.kernel.org

The patch titled
     Subject: userfaultfd: introduce mfill_get_vma() and mfill_put_vma()
has been added to the -mm mm-unstable branch.  Its filename is
     userfaultfd-introduce-mfill_get_vma-and-mfill_put_vma.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/userfaultfd-introduce-mfill_get_vma-and-mfill_put_vma.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via various branches at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there most days

------------------------------------------------------
From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
Subject: userfaultfd: introduce mfill_get_vma() and mfill_put_vma()
Date: Thu, 2 Apr 2026 07:11:45 +0300

Split the code that finds, locks and verifies the VMA from mfill_atomic()
into a helper function.  This function will be used later during the
refactoring of mfill_atomic_pte_copy().

Add a counterpart mfill_put_vma() helper that unlocks the VMA and releases
map_changing_lock.
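For readers picking up this patch in isolation: the two helpers below
operate on the struct mfill_state introduced earlier in this series
(userfaultfd-introduce-struct-mfill_state.patch).  As a rough orientation
aid, the shape below is reconstructed purely from the fields this patch
touches; field order and types are guesses, and the real definition may
carry additional members:

	/*
	 * Approximate sketch of struct mfill_state, inferred from the
	 * accesses in the hunks below -- not the authoritative definition,
	 * which is added by userfaultfd-introduce-struct-mfill_state.patch.
	 */
	struct mfill_state {
		struct userfaultfd_ctx *ctx;	/* uffd context: supplies mm and locks */
		struct vm_area_struct *vma;	/* dst VMA; NULL while unlocked */
		uffd_flags_t flags;		/* MFILL_ATOMIC_* mode and flags */
		unsigned long dst_start;	/* destination range start */
		unsigned long len;		/* destination range length */
		unsigned long src_addr;		/* current source address */
		unsigned long dst_addr;		/* current destination address */
		struct folio *folio;		/* folio carried across a retry */
	};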
[avagin@google.com: fix lock leak in mfill_get_vma()]
Link: https://lkml.kernel.org/r/20260316173829.1126728-1-avagin@google.com
Link: https://lkml.kernel.org/r/20260402041156.1377214-5-rppt@kernel.org
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
Signed-off-by: Andrei Vagin <avagin@google.com>
Cc: Andrea Arcangeli
Cc: Axel Rasmussen
Cc: Baolin Wang
Cc: David Hildenbrand (Arm)
Cc: Harry Yoo
Cc: Harry Yoo (Oracle)
Cc: Hugh Dickins
Cc: James Houghton
Cc: Liam Howlett
Cc: Lorenzo Stoakes (Oracle)
Cc: Matthew Wilcox (Oracle)
Cc: Michal Hocko
Cc: Muchun Song
Cc: Nikita Kalyazin
Cc: Oscar Salvador
Cc: Paolo Bonzini
Cc: Peter Xu
Cc: Sean Christopherson
Cc: Shuah Khan
Cc: Suren Baghdasaryan
Cc: Vlastimil Babka
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/userfaultfd.c |  125 +++++++++++++++++++++++++++------------------
 1 file changed, 75 insertions(+), 50 deletions(-)

--- a/mm/userfaultfd.c~userfaultfd-introduce-mfill_get_vma-and-mfill_put_vma
+++ a/mm/userfaultfd.c
@@ -157,6 +157,75 @@ static void uffd_mfill_unlock(struct vm_
 }
 #endif
 
+static void mfill_put_vma(struct mfill_state *state)
+{
+	if (!state->vma)
+		return;
+
+	up_read(&state->ctx->map_changing_lock);
+	uffd_mfill_unlock(state->vma);
+	state->vma = NULL;
+}
+
+static int mfill_get_vma(struct mfill_state *state)
+{
+	struct userfaultfd_ctx *ctx = state->ctx;
+	uffd_flags_t flags = state->flags;
+	struct vm_area_struct *dst_vma;
+	int err;
+
+	/*
+	 * Make sure the vma is not shared, that the dst range is
+	 * both valid and fully within a single existing vma.
+	 */
+	dst_vma = uffd_mfill_lock(ctx->mm, state->dst_start, state->len);
+	if (IS_ERR(dst_vma))
+		return PTR_ERR(dst_vma);
+
+	/*
+	 * If memory mappings are changing because of non-cooperative
+	 * operation (e.g. mremap) running in parallel, bail out and
+	 * request the user to retry later
+	 */
+	down_read(&ctx->map_changing_lock);
+	state->vma = dst_vma;
+	err = -EAGAIN;
+	if (atomic_read(&ctx->mmap_changing))
+		goto out_unlock;
+
+	err = -EINVAL;
+
+	/*
+	 * shmem_zero_setup is invoked in mmap for MAP_ANONYMOUS|MAP_SHARED but
+	 * it will overwrite vm_ops, so vma_is_anonymous must return false.
+	 */
+	if (WARN_ON_ONCE(vma_is_anonymous(dst_vma) &&
+			 dst_vma->vm_flags & VM_SHARED))
+		goto out_unlock;
+
+	/*
+	 * validate 'mode' now that we know the dst_vma: don't allow
+	 * a wrprotect copy if the userfaultfd didn't register as WP.
+	 */
+	if ((flags & MFILL_ATOMIC_WP) && !(dst_vma->vm_flags & VM_UFFD_WP))
+		goto out_unlock;
+
+	if (is_vm_hugetlb_page(dst_vma))
+		return 0;
+
+	if (!vma_is_anonymous(dst_vma) && !vma_is_shmem(dst_vma))
+		goto out_unlock;
+	if (!vma_is_shmem(dst_vma) &&
+	    uffd_flags_mode_is(flags, MFILL_ATOMIC_CONTINUE))
+		goto out_unlock;
+
+	return 0;
+
+out_unlock:
+	mfill_put_vma(state);
+	return err;
+}
+
 static pmd_t *mm_alloc_pmd(struct mm_struct *mm, unsigned long address)
 {
 	pgd_t *pgd;
@@ -767,8 +836,6 @@ static __always_inline ssize_t mfill_ato
 		.src_addr = src_start,
 		.dst_addr = dst_start,
 	};
-	struct mm_struct *dst_mm = ctx->mm;
-	struct vm_area_struct *dst_vma;
 	long copied = 0;
 	ssize_t err;
 
@@ -783,56 +850,17 @@ static __always_inline ssize_t mfill_ato
 	VM_WARN_ON_ONCE(dst_start + len <= dst_start);
 
 retry:
-	/*
-	 * Make sure the vma is not shared, that the dst range is
-	 * both valid and fully within a single existing vma.
-	 */
-	dst_vma = uffd_mfill_lock(dst_mm, dst_start, len);
-	if (IS_ERR(dst_vma)) {
-		err = PTR_ERR(dst_vma);
+	err = mfill_get_vma(&state);
+	if (err)
 		goto out;
-	}
-	state.vma = dst_vma;
-
-	/*
-	 * If memory mappings are changing because of non-cooperative
-	 * operation (e.g. mremap) running in parallel, bail out and
-	 * request the user to retry later
-	 */
-	down_read(&ctx->map_changing_lock);
-	err = -EAGAIN;
-	if (atomic_read(&ctx->mmap_changing))
-		goto out_unlock;
-
-	err = -EINVAL;
-	/*
-	 * shmem_zero_setup is invoked in mmap for MAP_ANONYMOUS|MAP_SHARED but
-	 * it will overwrite vm_ops, so vma_is_anonymous must return false.
-	 */
-	if (WARN_ON_ONCE(vma_is_anonymous(dst_vma) &&
-			 dst_vma->vm_flags & VM_SHARED))
-		goto out_unlock;
-
-	/*
-	 * validate 'mode' now that we know the dst_vma: don't allow
-	 * a wrprotect copy if the userfaultfd didn't register as WP.
-	 */
-	if ((flags & MFILL_ATOMIC_WP) && !(dst_vma->vm_flags & VM_UFFD_WP))
-		goto out_unlock;
 
 	/*
 	 * If this is a HUGETLB vma, pass off to appropriate routine
 	 */
-	if (is_vm_hugetlb_page(dst_vma))
-		return mfill_atomic_hugetlb(ctx, dst_vma, dst_start,
+	if (is_vm_hugetlb_page(state.vma))
+		return mfill_atomic_hugetlb(ctx, state.vma, dst_start,
					    src_start, len, flags);
 
-	if (!vma_is_anonymous(dst_vma) && !vma_is_shmem(dst_vma))
-		goto out_unlock;
-	if (!vma_is_shmem(dst_vma) &&
-	    uffd_flags_mode_is(flags, MFILL_ATOMIC_CONTINUE))
-		goto out_unlock;
-
 	while (state.src_addr < src_start + len) {
 		VM_WARN_ON_ONCE(state.dst_addr >= dst_start + len);
 
@@ -851,8 +879,7 @@ retry:
 		if (unlikely(err == -ENOENT)) {
 			void *kaddr;
 
-			up_read(&ctx->map_changing_lock);
-			uffd_mfill_unlock(state.vma);
+			mfill_put_vma(&state);
 
 			VM_WARN_ON_ONCE(!state.folio);
 			kaddr = kmap_local_folio(state.folio, 0);
@@ -881,9 +908,7 @@ retry:
 		break;
 	}
 
-out_unlock:
-	up_read(&ctx->map_changing_lock);
-	uffd_mfill_unlock(state.vma);
+	mfill_put_vma(&state);
 out:
 	if (state.folio)
 		folio_put(state.folio);
_
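One detail worth calling out, since it is what the avagin@google.com fix
above addresses: mfill_get_vma() stores the VMA in state->vma as soon as
both the VMA lock and map_changing_lock are held, before the mmap_changing
check, so the -EAGAIN bailout and every later failure path can release
both locks through the single mfill_put_vma() exit.  And because
mfill_put_vma() returns early when state->vma is already NULL and NULLs it
after unlocking, callers may invoke it on any path without tracking which
locks are held.  The resulting caller pattern, condensed from the
mfill_atomic() hunks above (illustrative pseudo-code, not compilable
as-is):

	retry:
		err = mfill_get_vma(&state);	/* lock + validate dst VMA */
		if (err)
			goto out;		/* get failed: nothing is held */
		/* ... fill pages ... */
		if (unlikely(err == -ENOENT)) {
			/* must not hold the locks while faulting in src memory */
			mfill_put_vma(&state);
			/* copy userspace data into state.folio */
			goto retry;		/* reacquire and revalidate */
		}
		mfill_put_vma(&state);		/* safe even if already dropped */
	out:
		/* ... */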
Patches currently in -mm which might be from rppt@kernel.org are

userfaultfd-introduce-mfill_copy_folio_locked-helper.patch
userfaultfd-introduce-struct-mfill_state.patch
userfaultfd-introduce-mfill_establish_pmd-helper.patch
userfaultfd-introduce-mfill_get_vma-and-mfill_put_vma.patch
userfaultfd-retry-copying-with-locks-dropped-in-mfill_atomic_pte_copy.patch
userfaultfd-move-vma_can_userfault-out-of-line.patch
userfaultfd-introduce-vm_uffd_ops.patch
shmem-userfaultfd-use-a-vma-callback-to-handle-uffdio_continue.patch
userfaultfd-introduce-vm_uffd_ops-alloc_folio.patch
shmem-userfaultfd-implement-shmem-uffd-operations-using-vm_uffd_ops.patch
userfaultfd-mfill_atomic-remove-retry-logic.patch