From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 30 Mar 2026 12:34:03 -0700
To: mm-commits@vger.kernel.org, rppt@kernel.org, akpm@linux-foundation.org
From: Andrew Morton
Subject: [to-be-updated] userfaultfd-introduce-vm_uffd_ops.patch removed from -mm tree
Message-Id: <20260330193404.3D9F6C4CEF7@smtp.kernel.org>
Precedence: bulk
X-Mailing-List: mm-commits@vger.kernel.org
List-Id: 
List-Subscribe: 
List-Unsubscribe: 

The quilt patch titled
     Subject: userfaultfd: introduce vm_uffd_ops
has been removed from the -mm tree.  Its filename was
     userfaultfd-introduce-vm_uffd_ops.patch

This patch was dropped because an updated version will be issued

------------------------------------------------------
From: "Mike Rapoport (Microsoft)"
Subject: userfaultfd: introduce vm_uffd_ops
Date: Fri, 6 Mar 2026 19:18:07 +0200

The current userfaultfd implementation works only with memory managed by
the core MM: anonymous, shmem and hugetlb.

First, there is no fundamental reason to limit userfaultfd support to the
core memory types: userfaults can be handled much like regular page
faults, provided the VMA owner implements the appropriate callbacks.

Second, various code paths have historically been conditioned on
vma_is_anonymous(), vma_is_shmem() and is_vm_hugetlb_page(), and some of
these conditions can be expressed as operations implemented by a
particular memory type.

Introduce a vm_uffd_ops extension to vm_operations_struct that delegates
memory-type-specific operations to the VMA owner.  Operations for
anonymous memory are handled internally in userfaultfd using
anon_uffd_ops, which is implicitly assigned to anonymous VMAs.

Start with a single operation, ->can_userfault(), which verifies at
registration time that a VMA meets the requirements for userfaultfd
support.  Implement this method for anonymous, shmem and hugetlb memory
and move the relevant parts of vma_can_userfault() into the new
callbacks.
[rppt@kernel.org: allow registration of WP_ASYNC for any VMA]
  Link: https://lkml.kernel.org/r/abG5HFV8yoEHOFkh@kernel.org
Link: https://lkml.kernel.org/r/20260306171815.3160826-8-rppt@kernel.org
Signed-off-by: Mike Rapoport (Microsoft)
Cc: Andrea Arcangeli
Cc: Axel Rasmussen
Cc: Baolin Wang
Cc: David Hildenbrand
Cc: Hugh Dickins
Cc: James Houghton
Cc: Liam Howlett
Cc: Lorenzo Stoakes (Oracle)
Cc: Matthew Wilcox (Oracle)
Cc: Michal Hocko
Cc: Muchun Song
Cc: Nikita Kalyazin
Cc: Oscar Salvador
Cc: Paolo Bonzini
Cc: Peter Xu
Cc: Sean Christopherson
Cc: Shuah Khan
Cc: Suren Baghdasaryan
Cc: Vlastimil Babka
Signed-off-by: Andrew Morton
---

 include/linux/mm.h            |    5 +++
 include/linux/userfaultfd_k.h |    6 ++++
 mm/hugetlb.c                  |   15 ++++++++++
 mm/shmem.c                    |   15 ++++++++++
 mm/userfaultfd.c              |   44 ++++++++++++++++++++++----------
 5 files changed, 72 insertions(+), 13 deletions(-)

--- a/include/linux/mm.h~userfaultfd-introduce-vm_uffd_ops
+++ a/include/linux/mm.h
@@ -758,6 +758,8 @@ struct vm_fault {
 	 */
 };
 
+struct vm_uffd_ops;
+
 /*
  * These are the virtual MM functions - opening of an area, closing and
  * unmapping it (needed to keep files on disk up-to-date etc), pointer
@@ -865,6 +867,9 @@ struct vm_operations_struct {
 	struct page *(*find_normal_page)(struct vm_area_struct *vma,
 					 unsigned long addr);
 #endif /* CONFIG_FIND_NORMAL_PAGE */
+#ifdef CONFIG_USERFAULTFD
+	const struct vm_uffd_ops *uffd_ops;
+#endif
 };
 
 #ifdef CONFIG_NUMA_BALANCING
--- a/include/linux/userfaultfd_k.h~userfaultfd-introduce-vm_uffd_ops
+++ a/include/linux/userfaultfd_k.h
@@ -83,6 +83,12 @@ struct userfaultfd_ctx {
 extern vm_fault_t handle_userfault(struct vm_fault *vmf,
 				   unsigned long reason);
 
+/* VMA userfaultfd operations */
+struct vm_uffd_ops {
+	/* Checks if a VMA can support userfaultfd */
+	bool (*can_userfault)(struct vm_area_struct *vma, vm_flags_t vm_flags);
+};
+
 /* A combined operation mode + behavior flags. */
 typedef unsigned int __bitwise uffd_flags_t;
--- a/mm/hugetlb.c~userfaultfd-introduce-vm_uffd_ops
+++ a/mm/hugetlb.c
@@ -4792,6 +4792,18 @@ static vm_fault_t hugetlb_vm_op_fault(st
 	return 0;
 }
 
+#ifdef CONFIG_USERFAULTFD
+static bool hugetlb_can_userfault(struct vm_area_struct *vma,
+				  vm_flags_t vm_flags)
+{
+	return true;
+}
+
+static const struct vm_uffd_ops hugetlb_uffd_ops = {
+	.can_userfault = hugetlb_can_userfault,
+};
+#endif
+
 /*
  * When a new function is introduced to vm_operations_struct and added
  * to hugetlb_vm_ops, please consider adding the function to shm_vm_ops.
@@ -4805,6 +4817,9 @@ const struct vm_operations_struct hugetl
 	.close = hugetlb_vm_op_close,
 	.may_split = hugetlb_vm_op_split,
 	.pagesize = hugetlb_vm_op_pagesize,
+#ifdef CONFIG_USERFAULTFD
+	.uffd_ops = &hugetlb_uffd_ops,
+#endif
 };
 
 static pte_t make_huge_pte(struct vm_area_struct *vma, struct folio *folio,
--- a/mm/shmem.c~userfaultfd-introduce-vm_uffd_ops
+++ a/mm/shmem.c
@@ -3288,6 +3288,15 @@ out_unacct_blocks:
 	shmem_inode_unacct_blocks(inode, 1);
 	return ret;
 }
+
+static bool shmem_can_userfault(struct vm_area_struct *vma, vm_flags_t vm_flags)
+{
+	return true;
+}
+
+static const struct vm_uffd_ops shmem_uffd_ops = {
+	.can_userfault = shmem_can_userfault,
+};
 #endif /* CONFIG_USERFAULTFD */
 
 #ifdef CONFIG_TMPFS
@@ -5307,6 +5316,9 @@ static const struct vm_operations_struct
 	.set_policy = shmem_set_policy,
 	.get_policy = shmem_get_policy,
 #endif
+#ifdef CONFIG_USERFAULTFD
+	.uffd_ops = &shmem_uffd_ops,
+#endif
 };
 
 static const struct vm_operations_struct shmem_anon_vm_ops = {
@@ -5316,6 +5328,9 @@ static const struct vm_operations_struct
 	.set_policy = shmem_set_policy,
 	.get_policy = shmem_get_policy,
 #endif
+#ifdef CONFIG_USERFAULTFD
+	.uffd_ops = &shmem_uffd_ops,
+#endif
 };
 
 int shmem_init_fs_context(struct fs_context *fc)
--- a/mm/userfaultfd.c~userfaultfd-introduce-vm_uffd_ops
+++ a/mm/userfaultfd.c
@@ -34,6 +34,25 @@ struct mfill_state {
 	pmd_t *pmd;
 };
 
+static bool anon_can_userfault(struct vm_area_struct *vma, vm_flags_t vm_flags)
+{
+	/* anonymous memory does not support MINOR mode */
+	if (vm_flags & VM_UFFD_MINOR)
+		return false;
+	return true;
+}
+
+static const struct vm_uffd_ops anon_uffd_ops = {
+	.can_userfault = anon_can_userfault,
+};
+
+static const struct vm_uffd_ops *vma_uffd_ops(struct vm_area_struct *vma)
+{
+	if (vma_is_anonymous(vma))
+		return &anon_uffd_ops;
+	return vma->vm_ops ? vma->vm_ops->uffd_ops : NULL;
+}
+
 static __always_inline
 bool validate_dst_vma(struct vm_area_struct *dst_vma, unsigned long dst_end)
 {
@@ -2021,34 +2040,33 @@ out:
 bool vma_can_userfault(struct vm_area_struct *vma, vm_flags_t vm_flags,
 		       bool wp_async)
 {
-	vm_flags &= __VM_UFFD_FLAGS;
+	const struct vm_uffd_ops *ops = vma_uffd_ops(vma);
 
-	if (vma->vm_flags & VM_DROPPABLE)
-		return false;
-
-	if ((vm_flags & VM_UFFD_MINOR) &&
-	    (!is_vm_hugetlb_page(vma) && !vma_is_shmem(vma)))
-		return false;
+	vm_flags &= __VM_UFFD_FLAGS;
 
 	/*
-	 * If wp async enabled, and WP is the only mode enabled, allow any
+	 * If WP is the only mode enabled and context is wp async, allow any
 	 * memory type.
 	 */
 	if (wp_async && (vm_flags == VM_UFFD_WP))
 		return true;
 
+	/* For any other mode reject VMAs that don't implement vm_uffd_ops */
+	if (!ops)
+		return false;
+
+	if (vma->vm_flags & VM_DROPPABLE)
+		return false;
+
 	/*
 	 * If user requested uffd-wp but not enabled pte markers for
-	 * uffd-wp, then shmem & hugetlbfs are not supported but only
-	 * anonymous.
+	 * uffd-wp, then only anonymous memory is supported
 	 */
 	if (!uffd_supports_wp_marker() && (vm_flags & VM_UFFD_WP) &&
 	    !vma_is_anonymous(vma))
 		return false;
 
-	/* By default, allow any of anon|shmem|hugetlb */
-	return vma_is_anonymous(vma) || is_vm_hugetlb_page(vma) ||
-	       vma_is_shmem(vma);
+	return ops->can_userfault(vma, vm_flags);
 }
 
 static void userfaultfd_set_vm_flags(struct vm_area_struct *vma,
_

Patches currently in -mm which might be from rppt@kernel.org are

shmem-userfaultfd-use-a-vma-callback-to-handle-uffdio_continue.patch
userfaultfd-introduce-vm_uffd_ops-alloc_folio.patch
shmem-userfaultfd-implement-shmem-uffd-operations-using-vm_uffd_ops.patch
userfaultfd-mfill_atomic-remove-retry-logic.patch