From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 01 Apr 2026 21:37:00 -0700
To: mm-commits@vger.kernel.org,rppt@kernel.org,akpm@linux-foundation.org
From: Andrew Morton
Subject: + shmem-userfaultfd-implement-shmem-uffd-operations-using-vm_uffd_ops.patch added to mm-unstable branch
Message-Id: <20260402043701.5F5FAC19423@smtp.kernel.org>
X-Mailing-List: mm-commits@vger.kernel.org

The patch titled
     Subject: shmem, userfaultfd: implement shmem uffd operations using vm_uffd_ops
has been added to the -mm mm-unstable branch.
Its filename is
     shmem-userfaultfd-implement-shmem-uffd-operations-using-vm_uffd_ops.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/shmem-userfaultfd-implement-shmem-uffd-operations-using-vm_uffd_ops.patch

This patch will later appear in the mm-unstable branch at
     git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via various branches at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there most days

------------------------------------------------------
From: "Mike Rapoport (Microsoft)"
Subject: shmem, userfaultfd: implement shmem uffd operations using vm_uffd_ops
Date: Thu, 2 Apr 2026 07:11:51 +0300

Add filemap_add() and filemap_remove() methods to vm_uffd_ops and use them
in __mfill_atomic_pte() to add shmem folios to the page cache and remove
them on error.

Implement these methods in shmem along with vm_uffd_ops->alloc_folio() and
drop shmem_mfill_atomic_pte().

Since userfaultfd no longer references any shmem functions, drop the
include of linux/shmem_fs.h from mm/userfaultfd.c.

mfill_atomic_install_pte() is not used anywhere outside of
mm/userfaultfd.c, so make it static.
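For orientation, the resulting UFFDIO_COPY flow through the ops table can
be condensed to the sketch below. This is illustrative pseudocode only:
mfill_pte_sketch() is an invented name, locking details and the
copy/uptodate step are elided, and the authoritative code is
__mfill_atomic_pte() in the diff that follows.

	/* sketch only: how the three vm_uffd_ops callbacks are driven */
	static int mfill_pte_sketch(struct mfill_state *state,
				    const struct vm_uffd_ops *ops)
	{
		struct folio *folio;
		int ret;

		/* 1: the provider allocates and charges a folio for dst_addr */
		folio = ops->alloc_folio(state->vma, state->dst_addr);
		if (!folio)
			return -ENOMEM;

		/* ... copy in the source data and mark the folio uptodate ... */

		/*
		 * 2: file-backed providers (shmem) insert the folio into
		 * their page cache; anonymous memory has no ->filemap_add
		 * and skips this step.
		 */
		if (ops->filemap_add) {
			ret = ops->filemap_add(folio, state->vma,
					       state->dst_addr);
			if (ret) {
				folio_put(folio);
				return ret;
			}
		}

		/* 3: map the page; on failure undo the page cache insertion */
		ret = mfill_atomic_install_pte(state->pmd, state->vma,
					       state->dst_addr, &folio->page,
					       state->flags);
		if (ret) {
			if (ops->filemap_remove)
				ops->filemap_remove(folio, state->vma);
			folio_put(folio);
		}
		return ret;
	}

Both new hooks are optional, so the shared/private distinction reduces to
picking the right ops table (anon_uffd_ops for private mappings).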
Link: https://lkml.kernel.org/r/20260402041156.1377214-11-rppt@kernel.org
Signed-off-by: Mike Rapoport (Microsoft)
Reviewed-by: James Houghton
Cc: Andrea Arcangeli
Cc: Andrei Vagin
Cc: Axel Rasmussen
Cc: Baolin Wang
Cc: David Hildenbrand (Arm)
Cc: Harry Yoo
Cc: Harry Yoo (Oracle)
Cc: Hugh Dickins
Cc: Liam Howlett
Cc: Lorenzo Stoakes (Oracle)
Cc: Matthew Wilcox (Oracle)
Cc: Michal Hocko
Cc: Muchun Song
Cc: Nikita Kalyazin
Cc: Oscar Salvador
Cc: Paolo Bonzini
Cc: Peter Xu
Cc: Sean Christopherson
Cc: Shuah Khan
Cc: Suren Baghdasaryan
Cc: Vlastimil Babka
Signed-off-by: Andrew Morton
---

 include/linux/shmem_fs.h      |   14 --
 include/linux/userfaultfd_k.h |   19 ++-
 mm/shmem.c                    |  150 +++++++++++---------------
 mm/userfaultfd.c              |   80 ++++++++---------
 4 files changed, 107 insertions(+), 156 deletions(-)

--- a/include/linux/shmem_fs.h~shmem-userfaultfd-implement-shmem-uffd-operations-using-vm_uffd_ops
+++ a/include/linux/shmem_fs.h
@@ -221,20 +221,6 @@ static inline pgoff_t shmem_fallocend(st
 
 extern bool shmem_charge(struct inode *inode, long pages);
 
-#ifdef CONFIG_USERFAULTFD
-#ifdef CONFIG_SHMEM
-extern int shmem_mfill_atomic_pte(pmd_t *dst_pmd,
-				  struct vm_area_struct *dst_vma,
-				  unsigned long dst_addr,
-				  unsigned long src_addr,
-				  uffd_flags_t flags,
-				  struct folio **foliop);
-#else /* !CONFIG_SHMEM */
-#define shmem_mfill_atomic_pte(dst_pmd, dst_vma, dst_addr, \
-			       src_addr, flags, foliop) ({ BUG(); 0; })
-#endif /* CONFIG_SHMEM */
-#endif /* CONFIG_USERFAULTFD */
-
 /*
  * Used space is stored as unsigned 64-bit value in bytes but
  * quota core supports only signed 64-bit values so use that
--- a/include/linux/userfaultfd_k.h~shmem-userfaultfd-implement-shmem-uffd-operations-using-vm_uffd_ops
+++ a/include/linux/userfaultfd_k.h
@@ -100,6 +100,20 @@ struct vm_uffd_ops {
 	 */
 	struct folio *(*alloc_folio)(struct vm_area_struct *vma,
 				     unsigned long addr);
+	/*
+	 * Called during resolution of UFFDIO_COPY request.
+	 * Should only be called with a folio returned by alloc_folio() above.
+	 * The folio will be set to locked.
+	 * Returns 0 on success, error code on failure.
+	 */
+	int (*filemap_add)(struct folio *folio, struct vm_area_struct *vma,
+			   unsigned long addr);
+	/*
+	 * Called during resolution of UFFDIO_COPY request on the error
+	 * handling path.
+	 * Should revert the operation of ->filemap_add().
+	 */
+	void (*filemap_remove)(struct folio *folio, struct vm_area_struct *vma);
 };
 
 /* A combined operation mode + behavior flags. */
@@ -133,11 +147,6 @@ static inline uffd_flags_t uffd_flags_se
 /* Flags controlling behavior. These behavior changes are mode-independent. */
 #define MFILL_ATOMIC_WP		MFILL_ATOMIC_FLAG(0)
 
-extern int mfill_atomic_install_pte(pmd_t *dst_pmd,
-				    struct vm_area_struct *dst_vma,
-				    unsigned long dst_addr, struct page *page,
-				    bool newly_allocated, uffd_flags_t flags);
-
 extern ssize_t mfill_atomic_copy(struct userfaultfd_ctx *ctx,
 				 unsigned long dst_start, unsigned long src_start,
 				 unsigned long len, uffd_flags_t flags);
--- a/mm/shmem.c~shmem-userfaultfd-implement-shmem-uffd-operations-using-vm_uffd_ops
+++ a/mm/shmem.c
@@ -3175,118 +3175,73 @@ static struct inode *shmem_get_inode(str
 #endif /* CONFIG_TMPFS_QUOTA */
 
 #ifdef CONFIG_USERFAULTFD
-int shmem_mfill_atomic_pte(pmd_t *dst_pmd,
-			   struct vm_area_struct *dst_vma,
-			   unsigned long dst_addr,
-			   unsigned long src_addr,
-			   uffd_flags_t flags,
-			   struct folio **foliop)
+static struct folio *shmem_mfill_folio_alloc(struct vm_area_struct *vma,
+					     unsigned long addr)
 {
-	struct inode *inode = file_inode(dst_vma->vm_file);
-	struct shmem_inode_info *info = SHMEM_I(inode);
+	struct inode *inode = file_inode(vma->vm_file);
 	struct address_space *mapping = inode->i_mapping;
+	struct shmem_inode_info *info = SHMEM_I(inode);
+	pgoff_t pgoff = linear_page_index(vma, addr);
 	gfp_t gfp = mapping_gfp_mask(mapping);
-	pgoff_t pgoff = linear_page_index(dst_vma, dst_addr);
-	void *page_kaddr;
 	struct folio *folio;
-	int ret;
-	pgoff_t max_off;
 
-	if (shmem_inode_acct_blocks(inode, 1)) {
-		/*
-		 * We may have got a page, returned -ENOENT triggering a retry,
-		 * and now we find ourselves with -ENOMEM. Release the page, to
-		 * avoid a BUG_ON in our caller.
-		 */
-		if (unlikely(*foliop)) {
-			folio_put(*foliop);
-			*foliop = NULL;
-		}
-		return -ENOMEM;
-	}
+	if (unlikely(pgoff >= DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE)))
+		return NULL;
 
-	if (!*foliop) {
-		ret = -ENOMEM;
-		folio = shmem_alloc_folio(gfp, 0, info, pgoff);
-		if (!folio)
-			goto out_unacct_blocks;
-
-		if (uffd_flags_mode_is(flags, MFILL_ATOMIC_COPY)) {
-			page_kaddr = kmap_local_folio(folio, 0);
-			/*
-			 * The read mmap_lock is held here. Despite the
-			 * mmap_lock being read recursive a deadlock is still
-			 * possible if a writer has taken a lock. For example:
-			 *
-			 * process A thread 1 takes read lock on own mmap_lock
-			 * process A thread 2 calls mmap, blocks taking write lock
-			 * process B thread 1 takes page fault, read lock on own mmap lock
-			 * process B thread 2 calls mmap, blocks taking write lock
-			 * process A thread 1 blocks taking read lock on process B
-			 * process B thread 1 blocks taking read lock on process A
-			 *
-			 * Disable page faults to prevent potential deadlock
-			 * and retry the copy outside the mmap_lock.
-			 */
-			pagefault_disable();
-			ret = copy_from_user(page_kaddr,
-					     (const void __user *)src_addr,
-					     PAGE_SIZE);
-			pagefault_enable();
-			kunmap_local(page_kaddr);
-
-			/* fallback to copy_from_user outside mmap_lock */
-			if (unlikely(ret)) {
-				*foliop = folio;
-				ret = -ENOENT;
-				/* don't free the page */
-				goto out_unacct_blocks;
-			}
-
-			flush_dcache_folio(folio);
-		} else {		/* ZEROPAGE */
-			clear_user_highpage(&folio->page, dst_addr);
-		}
-	} else {
-		folio = *foliop;
-		VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
-		*foliop = NULL;
+	folio = shmem_alloc_folio(gfp, 0, info, pgoff);
+	if (!folio)
+		return NULL;
+
+	if (mem_cgroup_charge(folio, vma->vm_mm, GFP_KERNEL)) {
+		folio_put(folio);
+		return NULL;
 	}
 
-	VM_BUG_ON(folio_test_locked(folio));
-	VM_BUG_ON(folio_test_swapbacked(folio));
+	return folio;
+}
+
+static int shmem_mfill_filemap_add(struct folio *folio,
+				   struct vm_area_struct *vma,
+				   unsigned long addr)
+{
+	struct inode *inode = file_inode(vma->vm_file);
+	struct address_space *mapping = inode->i_mapping;
+	pgoff_t pgoff = linear_page_index(vma, addr);
+	gfp_t gfp = mapping_gfp_mask(mapping);
+	int err;
+
 	__folio_set_locked(folio);
 	__folio_set_swapbacked(folio);
-	__folio_mark_uptodate(folio);
 
-	ret = -EFAULT;
-	max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
-	if (unlikely(pgoff >= max_off))
-		goto out_release;
-
-	ret = mem_cgroup_charge(folio, dst_vma->vm_mm, gfp);
-	if (ret)
-		goto out_release;
-	ret = shmem_add_to_page_cache(folio, mapping, pgoff, NULL, gfp);
-	if (ret)
-		goto out_release;
-
-	ret = mfill_atomic_install_pte(dst_pmd, dst_vma, dst_addr,
-				       &folio->page, true, flags);
-	if (ret)
-		goto out_delete_from_cache;
+	err = shmem_add_to_page_cache(folio, mapping, pgoff, NULL, gfp);
+	if (err)
+		goto err_unlock;
 
+	if (shmem_inode_acct_blocks(inode, 1)) {
+		err = -ENOMEM;
+		goto err_delete_from_cache;
+	}
+
+	folio_add_lru(folio);
 	shmem_recalc_inode(inode, 1, 0);
-	folio_unlock(folio);
+
 	return 0;
-out_delete_from_cache:
+
+err_delete_from_cache:
+	filemap_remove_folio(folio);
+err_unlock:
+	folio_unlock(folio);
+	return err;
+}
+
+static void shmem_mfill_filemap_remove(struct folio *folio,
+				       struct vm_area_struct *vma)
+{
+	struct inode *inode = file_inode(vma->vm_file);
+
 	filemap_remove_folio(folio);
-out_release:
+	shmem_recalc_inode(inode, 0, 0);
 	folio_unlock(folio);
-	folio_put(folio);
-out_unacct_blocks:
-	shmem_inode_unacct_blocks(inode, 1);
-	return ret;
 }
 
 static struct folio *shmem_get_folio_noalloc(struct inode *inode, pgoff_t pgoff)
@@ -3309,6 +3264,9 @@ static bool shmem_can_userfault(struct v
 static const struct vm_uffd_ops shmem_uffd_ops = {
 	.can_userfault = shmem_can_userfault,
 	.get_folio_noalloc = shmem_get_folio_noalloc,
+	.alloc_folio = shmem_mfill_folio_alloc,
+	.filemap_add = shmem_mfill_filemap_add,
+	.filemap_remove = shmem_mfill_filemap_remove,
 };
 
 #endif /* CONFIG_USERFAULTFD */
--- a/mm/userfaultfd.c~shmem-userfaultfd-implement-shmem-uffd-operations-using-vm_uffd_ops
+++ a/mm/userfaultfd.c
@@ -14,7 +14,6 @@
 #include
 #include
 #include
-#include <linux/shmem_fs.h>
 #include
 #include
 #include "internal.h"
@@ -338,10 +337,10 @@ static bool mfill_file_over_size(struct
  * This function handles both MCOPY_ATOMIC_NORMAL and _CONTINUE for both shmem
  * and anon, and for both shared and private VMAs.
  */
-int mfill_atomic_install_pte(pmd_t *dst_pmd,
-			     struct vm_area_struct *dst_vma,
-			     unsigned long dst_addr, struct page *page,
-			     bool newly_allocated, uffd_flags_t flags)
+static int mfill_atomic_install_pte(pmd_t *dst_pmd,
+				    struct vm_area_struct *dst_vma,
+				    unsigned long dst_addr, struct page *page,
+				    uffd_flags_t flags)
 {
 	int ret;
 	struct mm_struct *dst_mm = dst_vma->vm_mm;
@@ -385,9 +384,6 @@ int mfill_atomic_install_pte(pmd_t *dst_
 		goto out_unlock;
 
 	if (page_in_cache) {
-		/* Usually, cache pages are already added to LRU */
-		if (newly_allocated)
-			folio_add_lru(folio);
 		folio_add_file_rmap_pte(folio, page, dst_vma);
 	} else {
 		folio_add_new_anon_rmap(folio, dst_vma, dst_addr, RMAP_EXCLUSIVE);
@@ -402,6 +398,9 @@ int mfill_atomic_install_pte(pmd_t *dst_
 
 	set_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte);
 
+	if (page_in_cache)
+		folio_unlock(folio);
+
 	/* No need to invalidate - it was non-present before */
 	update_mmu_cache(dst_vma, dst_addr, dst_pte);
 	ret = 0;
@@ -514,13 +513,22 @@ static int __mfill_atomic_pte(struct mfi
 	 */
 	__folio_mark_uptodate(folio);
 
+	if (ops->filemap_add) {
+		ret = ops->filemap_add(folio, state->vma, state->dst_addr);
+		if (ret)
+			goto err_folio_put;
+	}
+
 	ret = mfill_atomic_install_pte(state->pmd, state->vma, dst_addr,
-				       &folio->page, true, flags);
+				       &folio->page, flags);
 	if (ret)
-		goto err_folio_put;
+		goto err_filemap_remove;
 
 	return 0;
 
+err_filemap_remove:
+	if (ops->filemap_remove)
+		ops->filemap_remove(folio, state->vma);
 err_folio_put:
 	folio_put(folio);
 	/* Don't return -ENOENT so that our caller won't retry */
@@ -533,6 +541,18 @@ static int mfill_atomic_pte_copy(struct
 {
 	const struct vm_uffd_ops *ops = vma_uffd_ops(state->vma);
 
+	/*
+	 * The normal page fault path for a MAP_PRIVATE mapping in a
+	 * file-backed VMA will invoke the fault, fill the hole in the file and
+	 * COW it right away. The result generates plain anonymous memory.
+	 * So when we are asked to fill a hole in a MAP_PRIVATE mapping, we'll
+	 * generate anonymous memory directly without actually filling the
+	 * hole. For the MAP_PRIVATE case the robustness check only happens in
+	 * the pagetable (to verify it's still none) and not in the page cache.
+	 */
+	if (!(state->vma->vm_flags & VM_SHARED))
+		ops = &anon_uffd_ops;
+
 	return __mfill_atomic_pte(state, ops);
 }
 
@@ -552,7 +572,8 @@ static int mfill_atomic_pte_zeropage(str
 	spinlock_t *ptl;
 	int ret;
 
-	if (mm_forbids_zeropage(dst_vma->vm_mm))
+	if (mm_forbids_zeropage(dst_vma->vm_mm) ||
+	    (dst_vma->vm_flags & VM_SHARED))
 		return mfill_atomic_pte_zeroed_folio(state);
 
 	_dst_pte = pte_mkspecial(pfn_pte(zero_pfn(dst_addr),
@@ -609,11 +630,10 @@ static int mfill_atomic_pte_continue(str
 	}
 
 	ret = mfill_atomic_install_pte(dst_pmd, dst_vma, dst_addr,
-				       page, false, flags);
+				       page, flags);
 	if (ret)
 		goto out_release;
 
-	folio_unlock(folio);
 	return 0;
 
 out_release:
@@ -836,41 +856,19 @@ extern ssize_t mfill_atomic_hugetlb(stru
 
 static __always_inline ssize_t mfill_atomic_pte(struct mfill_state *state)
 {
-	struct vm_area_struct *dst_vma = state->vma;
-	unsigned long src_addr = state->src_addr;
-	unsigned long dst_addr = state->dst_addr;
-	struct folio **foliop = &state->folio;
 	uffd_flags_t flags = state->flags;
-	pmd_t *dst_pmd = state->pmd;
-	ssize_t err;
 
 	if (uffd_flags_mode_is(flags, MFILL_ATOMIC_CONTINUE))
 		return mfill_atomic_pte_continue(state);
 	if (uffd_flags_mode_is(flags, MFILL_ATOMIC_POISON))
 		return mfill_atomic_pte_poison(state);
+	if (uffd_flags_mode_is(flags, MFILL_ATOMIC_COPY))
+		return mfill_atomic_pte_copy(state);
+	if (uffd_flags_mode_is(flags, MFILL_ATOMIC_ZEROPAGE))
+		return mfill_atomic_pte_zeropage(state);
 
-	/*
-	 * The normal page fault path for a shmem will invoke the
-	 * fault, fill the hole in the file and COW it right away. The
-	 * result generates plain anonymous memory. So when we are
-	 * asked to fill an hole in a MAP_PRIVATE shmem mapping, we'll
-	 * generate anonymous memory directly without actually filling
-	 * the hole. For the MAP_PRIVATE case the robustness check
-	 * only happens in the pagetable (to verify it's still none)
-	 * and not in the radix tree.
-	 */
-	if (!(dst_vma->vm_flags & VM_SHARED)) {
-		if (uffd_flags_mode_is(flags, MFILL_ATOMIC_COPY))
-			err = mfill_atomic_pte_copy(state);
-		else
-			err = mfill_atomic_pte_zeropage(state);
-	} else {
-		err = shmem_mfill_atomic_pte(dst_pmd, dst_vma,
-					     dst_addr, src_addr,
-					     flags, foliop);
-	}
-
-	return err;
+	VM_WARN_ONCE(1, "Unknown UFFDIO operation, flags: %x", flags);
+	return -EOPNOTSUPP;
 }
 
 static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
_

Patches currently in -mm which might be from rppt@kernel.org are

userfaultfd-introduce-mfill_copy_folio_locked-helper.patch
userfaultfd-introduce-struct-mfill_state.patch
userfaultfd-introduce-mfill_establish_pmd-helper.patch
userfaultfd-introduce-mfill_get_vma-and-mfill_put_vma.patch
userfaultfd-retry-copying-with-locks-dropped-in-mfill_atomic_pte_copy.patch
userfaultfd-move-vma_can_userfault-out-of-line.patch
userfaultfd-introduce-vm_uffd_ops.patch
shmem-userfaultfd-use-a-vma-callback-to-handle-uffdio_continue.patch
userfaultfd-introduce-vm_uffd_ops-alloc_folio.patch
shmem-userfaultfd-implement-shmem-uffd-operations-using-vm_uffd_ops.patch
userfaultfd-mfill_atomic-remove-retry-logic.patch