From: Peter Xu <peterx@redhat.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Nikita Kalyazin, peterx@redhat.com, Hugh Dickins, Oscar Salvador,
    Michal Hocko, David Hildenbrand, Muchun Song, Andrea Arcangeli,
    Ujwal Kundur, Suren Baghdasaryan, Andrew Morton, Vlastimil Babka,
    "Liam R. Howlett", James Houghton, Mike Rapoport, Lorenzo Stoakes,
    Axel Rasmussen
Howlett" , James Houghton , Mike Rapoport , Lorenzo Stoakes , Axel Rasmussen Subject: [PATCH 4/4] mm: Apply vm_uffd_ops API to core mm Date: Fri, 20 Jun 2025 15:03:42 -0400 Message-ID: <20250620190342.1780170-5-peterx@redhat.com> X-Mailer: git-send-email 2.49.0 In-Reply-To: <20250620190342.1780170-1-peterx@redhat.com> References: <20250620190342.1780170-1-peterx@redhat.com> MIME-Version: 1.0 X-Mimecast-Spam-Score: 0 X-Mimecast-MFC-PROC-ID: U2ilVVVJbqcPIvWd1oeSi8lIiwGWxjoALENPKm-ZXns_1750446240 X-Mimecast-Originator: redhat.com Content-Transfer-Encoding: 8bit content-type: text/plain; charset="US-ASCII"; x-default=true X-Rspamd-Server: rspam05 X-Rspamd-Queue-Id: 5196F1C000D X-Stat-Signature: fr88awcjsz4w8aaunt9x7jf9shnb6t4w X-Rspam-User: X-HE-Tag: 1750446242-103852 X-HE-Meta: U2FsdGVkX19uLT4NRTVkVxrtqI1EAJwcCJvw+9oePgaP4c8DqUslcUsDSCl212lbVBKwhCZPHYoivwofeOMvBkii2YjgprwMTJPm6twiWX0XeTsHG02LvrJNU4ZlgQHr4bnsiYM25RL18PeBBnf6wiDIRjCdg/V0FeT1wuzzyjtRUdXfwfAcvFbws6HEKmD0pfWAk3/SHk8V4Hr7pMTu166wOWbw7DEiqq5GQZfHPqmnwC+7fG27atAhBRueio8ADZLHpK4fAzxl90KkIUqTzxGfYgbxoCdTY1WwiuWjtcH1Yt38/bSFamtovFsu7FseVai67YtiNas59oCxx1Fjhr/y6qWn/AgbkKx01wf8L9ty1+RfgiFXWZ1330ZfgSOWz8tnHsbzop28XTCE9oWbJHZm+R6DcAGWHzBU3fXC5mc3r06tBYLMlFaLXoMI0VsWs68Gwy4skYpLP2aJyUgs8eUeOlCC1VhOgxnNJcUapmp/0ZVBsNxR++VUGDzj+yO6dxIPsTLvdW88JeVPro8Tc/ek7QpClsbGA9ZHGQ1QaIkla/RWWUki0dE7xPA6ErnkorHcTSUFnPbu3d02UJNtek8nBaZm8X6/vI+WcANf9OvzE4nHTg1wGgHkSygY30hIEjOQ2o6vYI+eiJdBxsrmsr7gBfO4+pvtC8G8hSDmCkbG3/So6zJRZVMxDdlOnZHE/XRg+5IbWFfaxVasIHpgTK4ILtrFffqB7qwZQ3SpdfkVr9T2zDZAlBNLXoZRShQnhp5Ccjl+vsuclc+j9tQ2HPMjyDtwEVP6gXGmJaNJLQ6iwS1AM8VPovHPWXdekR3j7S35c3aM6mA70nKNEPCzvlQUb8+SJwWKGadNP2J6eZRiESvnMo/uYe9+HlR2GxNoGyU5IlfIK4mLKMz658JvvU5UxB7E1TTadd65ukTHYH+uJnujeaVAPY4vU9zYOYMyn++ibn9yVgKPqoiR0u8 fAu85BfF dBQdbl+tuhhKPLPlzsgPLUxVUJlYp/g+oxVThh188T0b8dnzMDZA5GdB0VmZB9hBLIGIVuKSRkWzzCdJACxOCvxQYVgxYX8NiuAy43CYIBefsfYxnVfd3SMK17w== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: This patch completely moves the old userfaultfd core to use the new vm_uffd_ops API. After this change, existing file systems will start to use the new API for userfault operations. When at it, moving vma_can_userfault() into mm/userfaultfd.c instead, because it's getting too big. It's only used in slow paths so it shouldn't be an issue. This will also remove quite some hard-coded checks for either shmem or hugetlbfs. Now all the old checks should still work but with vm_uffd_ops. Note that anonymous memory will still need to be processed separately because it doesn't have vm_ops at all. 
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 include/linux/shmem_fs.h      |  14 -----
 include/linux/userfaultfd_k.h |  46 ++++----------
 mm/shmem.c                    |   2 +-
 mm/userfaultfd.c              | 115 +++++++++++++++++++++++++---------
 4 files changed, 101 insertions(+), 76 deletions(-)

diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index 6d0f9c599ff7..2f5b7b295cf6 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -195,20 +195,6 @@ static inline pgoff_t shmem_fallocend(struct inode *inode, pgoff_t eof)
 extern bool shmem_charge(struct inode *inode, long pages);
 extern void shmem_uncharge(struct inode *inode, long pages);
 
-#ifdef CONFIG_USERFAULTFD
-#ifdef CONFIG_SHMEM
-extern int shmem_mfill_atomic_pte(pmd_t *dst_pmd,
-				  struct vm_area_struct *dst_vma,
-				  unsigned long dst_addr,
-				  unsigned long src_addr,
-				  uffd_flags_t flags,
-				  struct folio **foliop);
-#else /* !CONFIG_SHMEM */
-#define shmem_mfill_atomic_pte(dst_pmd, dst_vma, dst_addr, \
-			       src_addr, flags, foliop) ({ BUG(); 0; })
-#endif /* CONFIG_SHMEM */
-#endif /* CONFIG_USERFAULTFD */
-
 /*
  * Used space is stored as unsigned 64-bit value in bytes but
  * quota core supports only signed 64-bit values so use that
diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
index e79c724b3b95..4e56ad423a4a 100644
--- a/include/linux/userfaultfd_k.h
+++ b/include/linux/userfaultfd_k.h
@@ -85,9 +85,14 @@ extern vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason);
 #define MFILL_ATOMIC_FLAG(nr)	((__force uffd_flags_t) MFILL_ATOMIC_BIT(nr))
 #define MFILL_ATOMIC_MODE_MASK	((__force uffd_flags_t) (MFILL_ATOMIC_BIT(0) - 1))
 
+static inline enum mfill_atomic_mode uffd_flags_get_mode(uffd_flags_t flags)
+{
+	return (enum mfill_atomic_mode)(flags & MFILL_ATOMIC_MODE_MASK);
+}
+
 static inline bool uffd_flags_mode_is(uffd_flags_t flags, enum mfill_atomic_mode expected)
 {
-	return (flags & MFILL_ATOMIC_MODE_MASK) == ((__force uffd_flags_t) expected);
+	return uffd_flags_get_mode(flags) == expected;
 }
 
 static inline uffd_flags_t uffd_flags_set_mode(uffd_flags_t flags, enum mfill_atomic_mode mode)
@@ -196,41 +201,16 @@ static inline bool userfaultfd_armed(struct vm_area_struct *vma)
 	return vma->vm_flags & __VM_UFFD_FLAGS;
 }
 
-static inline bool vma_can_userfault(struct vm_area_struct *vma,
-				     unsigned long vm_flags,
-				     bool wp_async)
+static inline const vm_uffd_ops *vma_get_uffd_ops(struct vm_area_struct *vma)
 {
-	vm_flags &= __VM_UFFD_FLAGS;
-
-	if (vma->vm_flags & VM_DROPPABLE)
-		return false;
-
-	if ((vm_flags & VM_UFFD_MINOR) &&
-	    (!is_vm_hugetlb_page(vma) && !vma_is_shmem(vma)))
-		return false;
-
-	/*
-	 * If wp async enabled, and WP is the only mode enabled, allow any
-	 * memory type.
-	 */
-	if (wp_async && (vm_flags == VM_UFFD_WP))
-		return true;
-
-#ifndef CONFIG_PTE_MARKER_UFFD_WP
-	/*
-	 * If user requested uffd-wp but not enabled pte markers for
-	 * uffd-wp, then shmem & hugetlbfs are not supported but only
-	 * anonymous.
-	 */
-	if ((vm_flags & VM_UFFD_WP) && !vma_is_anonymous(vma))
-		return false;
-#endif
-
-	/* By default, allow any of anon|shmem|hugetlb */
-	return vma_is_anonymous(vma) || is_vm_hugetlb_page(vma) ||
-	    vma_is_shmem(vma);
+	if (vma->vm_ops && vma->vm_ops->userfaultfd_ops)
+		return vma->vm_ops->userfaultfd_ops;
+	return NULL;
 }
 
+bool vma_can_userfault(struct vm_area_struct *vma,
+		       unsigned long vm_flags, bool wp_async);
+
 static inline bool vma_has_uffd_without_event_remap(struct vm_area_struct *vma)
 {
 	struct userfaultfd_ctx *uffd_ctx = vma->vm_userfaultfd_ctx.ctx;
diff --git a/mm/shmem.c b/mm/shmem.c
index bd0a29000318..4d71fc7be358 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -3158,7 +3158,7 @@ static int shmem_uffd_get_folio(struct inode *inode, pgoff_t pgoff,
 	return shmem_get_folio(inode, pgoff, 0, folio, SGP_NOALLOC);
 }
 
-int shmem_mfill_atomic_pte(pmd_t *dst_pmd,
+static int shmem_mfill_atomic_pte(pmd_t *dst_pmd,
 			   struct vm_area_struct *dst_vma,
 			   unsigned long dst_addr,
 			   unsigned long src_addr,
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 879505c6996f..61783ff2d335 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -14,12 +14,48 @@
 #include <linux/userfaultfd_k.h>
 #include <linux/mmu_notifier.h>
 #include <linux/hugetlb.h>
-#include <linux/shmem_fs.h>
 #include <asm/tlbflush.h>
 #include "internal.h"
 #include "swap.h"
 
+bool vma_can_userfault(struct vm_area_struct *vma,
+		       unsigned long vm_flags, bool wp_async)
+{
+	unsigned long supported;
+
+	if (vma->vm_flags & VM_DROPPABLE)
+		return false;
+
+	vm_flags &= __VM_UFFD_FLAGS;
+
+#ifndef CONFIG_PTE_MARKER_UFFD_WP
+	/*
+	 * If the user requested uffd-wp but did not enable pte markers
+	 * for uffd-wp, then only anonymous memory is supported; file
+	 * systems (like shmem or hugetlbfs) are not.
+	 */
+	if ((vm_flags & VM_UFFD_WP) && !vma_is_anonymous(vma))
+		return false;
+#endif
+	/*
+	 * If wp async enabled, and WP is the only mode enabled, allow any
+	 * memory type.
+	 */
+	if (wp_async && (vm_flags == VM_UFFD_WP))
+		return true;
+
+	if (vma_is_anonymous(vma))
+		/* Anonymous has no page cache, MINOR not supported */
+		supported = VM_UFFD_MISSING | VM_UFFD_WP;
+	else if (vma_get_uffd_ops(vma))
+		supported = vma_get_uffd_ops(vma)->uffd_features;
+	else
+		return false;
+
+	return !(vm_flags & (~supported));
+}
+
 static __always_inline bool validate_dst_vma(struct vm_area_struct *dst_vma,
 					     unsigned long dst_end)
 {
@@ -384,11 +420,15 @@ static int mfill_atomic_pte_continue(pmd_t *dst_pmd,
 {
 	struct inode *inode = file_inode(dst_vma->vm_file);
 	pgoff_t pgoff = linear_page_index(dst_vma, dst_addr);
+	const vm_uffd_ops *uffd_ops = vma_get_uffd_ops(dst_vma);
 	struct folio *folio;
 	struct page *page;
 	int ret;
 
-	ret = shmem_get_folio(inode, pgoff, 0, &folio, SGP_NOALLOC);
+	if (WARN_ON_ONCE(!uffd_ops || !uffd_ops->uffd_get_folio))
+		return -EINVAL;
+
+	ret = uffd_ops->uffd_get_folio(inode, pgoff, &folio);
 	/* Our caller expects us to return -EFAULT if we failed to find folio */
 	if (ret == -ENOENT)
 		ret = -EFAULT;
@@ -504,18 +544,6 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 	u32 hash;
 	struct address_space *mapping;
 
-	/*
-	 * There is no default zero huge page for all huge page sizes as
-	 * supported by hugetlb. A PMD_SIZE huge pages may exist as used
-	 * by THP. Since we can not reliably insert a zero page, this
-	 * feature is not supported.
-	 */
-	if (uffd_flags_mode_is(flags, MFILL_ATOMIC_ZEROPAGE)) {
-		up_read(&ctx->map_changing_lock);
-		uffd_mfill_unlock(dst_vma);
-		return -EINVAL;
-	}
-
 	src_addr = src_start;
 	dst_addr = dst_start;
 	copied = 0;
@@ -686,14 +714,55 @@ static __always_inline ssize_t mfill_atomic_pte(pmd_t *dst_pmd,
 			err = mfill_atomic_pte_zeropage(dst_pmd,
 							dst_vma, dst_addr);
 	} else {
-		err = shmem_mfill_atomic_pte(dst_pmd, dst_vma,
-					     dst_addr, src_addr,
-					     flags, foliop);
+		const vm_uffd_ops *uffd_ops = vma_get_uffd_ops(dst_vma);
+
+		if (WARN_ON_ONCE(!uffd_ops || !uffd_ops->uffd_copy)) {
+			err = -EINVAL;
+		} else {
+			err = uffd_ops->uffd_copy(dst_pmd, dst_vma,
+						  dst_addr, src_addr,
+						  flags, foliop);
+		}
 	}
 
 	return err;
 }
 
+static inline bool
+vma_uffd_ops_supported(struct vm_area_struct *vma, uffd_flags_t flags)
+{
+	enum mfill_atomic_mode mode = uffd_flags_get_mode(flags);
+	const vm_uffd_ops *uffd_ops;
+	unsigned long uffd_ioctls;
+
+	if ((flags & MFILL_ATOMIC_WP) && !(vma->vm_flags & VM_UFFD_WP))
+		return false;
+
+	/* Anonymous supports everything except CONTINUE */
+	if (vma_is_anonymous(vma))
+		return mode != MFILL_ATOMIC_CONTINUE;
+
+	uffd_ops = vma_get_uffd_ops(vma);
+	if (!uffd_ops)
+		return false;
+
+	uffd_ioctls = uffd_ops->uffd_ioctls;
+	switch (mode) {
+	case MFILL_ATOMIC_COPY:
+		return uffd_ioctls & BIT(_UFFDIO_COPY);
+	case MFILL_ATOMIC_ZEROPAGE:
+		return uffd_ioctls & BIT(_UFFDIO_ZEROPAGE);
+	case MFILL_ATOMIC_CONTINUE:
+		if (!(vma->vm_flags & VM_SHARED))
+			return false;
+		return uffd_ioctls & BIT(_UFFDIO_CONTINUE);
+	case MFILL_ATOMIC_POISON:
+		return uffd_ioctls & BIT(_UFFDIO_POISON);
+	default:
+		return false;
+	}
+}
+
 static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 					    unsigned long dst_start,
 					    unsigned long src_start,
@@ -752,11 +821,7 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 	    dst_vma->vm_flags & VM_SHARED))
 		goto out_unlock;
 
-	/*
-	 * validate 'mode' now that we know the dst_vma: don't allow
-	 * a wrprotect copy if the userfaultfd didn't register as WP.
-	 */
-	if ((flags & MFILL_ATOMIC_WP) && !(dst_vma->vm_flags & VM_UFFD_WP))
+	if (!vma_uffd_ops_supported(dst_vma, flags))
 		goto out_unlock;
 
 	/*
@@ -766,12 +831,6 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 		return mfill_atomic_hugetlb(ctx, dst_vma, dst_start,
 					    src_start, len, flags);
 
-	if (!vma_is_anonymous(dst_vma) && !vma_is_shmem(dst_vma))
-		goto out_unlock;
-	if (!vma_is_shmem(dst_vma) &&
-	    uffd_flags_mode_is(flags, MFILL_ATOMIC_CONTINUE))
-		goto out_unlock;
-
 	while (src_addr < src_start + len) {
 		pmd_t dst_pmdval;
-- 
2.49.0