From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 30 Mar 2026 12:43:58 -0700
To: mm-commits@vger.kernel.org, rppt@kernel.org, akpm@linux-foundation.org
From: Andrew Morton
Subject: + shmem-userfaultfd-use-a-vma-callback-to-handle-uffdio_continue.patch added to mm-unstable branch
Message-Id: <20260330194359.75944C4CEF7@smtp.kernel.org>
Precedence: bulk
X-Mailing-List: mm-commits@vger.kernel.org
List-Id: 
List-Subscribe: 
List-Unsubscribe: 

The patch titled
     Subject: shmem, userfaultfd: use a VMA callback to handle UFFDIO_CONTINUE
has been added to the -mm mm-unstable branch.  Its filename is
     shmem-userfaultfd-use-a-vma-callback-to-handle-uffdio_continue.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/shmem-userfaultfd-use-a-vma-callback-to-handle-uffdio_continue.patch

This patch will later appear in the mm-unstable branch at
     git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via various branches at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there most days

------------------------------------------------------
From: "Mike Rapoport (Microsoft)"
Subject: shmem, userfaultfd: use a VMA callback to handle UFFDIO_CONTINUE
Date: Mon, 30 Mar 2026 13:11:09 +0300

When userspace resolves a page fault in a shmem VMA with UFFDIO_CONTINUE,
it needs to get a folio that already exists in the pagecache backing that
VMA.

Instead of using shmem_get_folio() for that, add a get_folio_noalloc()
method to 'struct vm_uffd_ops' that will return a folio if it exists in
the VMA's pagecache at the given pgoff.

Implement the get_folio_noalloc() method for shmem and slightly refactor
userfaultfd's mfill_get_vma() and mfill_atomic_pte_continue() to support
this new API.
Link: https://lkml.kernel.org/r/20260330101116.1117699-9-rppt@kernel.org
Signed-off-by: Mike Rapoport (Microsoft)
Reviewed-by: James Houghton
Cc: Andrea Arcangeli
Cc: Andrei Vagin
Cc: Axel Rasmussen
Cc: Baolin Wang
Cc: David Hildenbrand (Arm)
Cc: Harry Yoo
Cc: Hugh Dickins
Cc: Liam Howlett
Cc: Lorenzo Stoakes (Oracle)
Cc: Matthew Wilcox (Oracle)
Cc: Michal Hocko
Cc: Muchun Song
Cc: Nikita Kalyazin
Cc: Oscar Salvador
Cc: Paolo Bonzini
Cc: Peter Xu
Cc: Sean Christopherson
Cc: Shuah Khan
Cc: Suren Baghdasaryan
Cc: Vlastimil Babka
Signed-off-by: Andrew Morton
---

 include/linux/userfaultfd_k.h |    7 ++++++
 mm/shmem.c                    |   15 +++++++++++++-
 mm/userfaultfd.c              |   34 ++++++++++++++++----------------
 3 files changed, 39 insertions(+), 17 deletions(-)

--- a/include/linux/userfaultfd_k.h~shmem-userfaultfd-use-a-vma-callback-to-handle-uffdio_continue
+++ a/include/linux/userfaultfd_k.h
@@ -87,6 +87,13 @@ extern vm_fault_t handle_userfault(struc
 struct vm_uffd_ops {
 	/* Checks if a VMA can support userfaultfd */
 	bool (*can_userfault)(struct vm_area_struct *vma, vm_flags_t vm_flags);
+	/*
+	 * Called to resolve UFFDIO_CONTINUE request.
+	 * Should return the folio found at pgoff in the VMA's pagecache if it
+	 * exists or ERR_PTR otherwise.
+	 * The returned folio is locked and with reference held.
+	 */
+	struct folio *(*get_folio_noalloc)(struct inode *inode, pgoff_t pgoff);
 };
 /* A combined operation mode + behavior flags.
  */
--- a/mm/shmem.c~shmem-userfaultfd-use-a-vma-callback-to-handle-uffdio_continue
+++ a/mm/shmem.c
@@ -3289,13 +3289,26 @@ out_unacct_blocks:
 	return ret;
 }
 
+static struct folio *shmem_get_folio_noalloc(struct inode *inode, pgoff_t pgoff)
+{
+	struct folio *folio;
+	int err;
+
+	err = shmem_get_folio(inode, pgoff, 0, &folio, SGP_NOALLOC);
+	if (err)
+		return ERR_PTR(err);
+
+	return folio;
+}
+
 static bool shmem_can_userfault(struct vm_area_struct *vma, vm_flags_t vm_flags)
 {
 	return true;
 }
 
 static const struct vm_uffd_ops shmem_uffd_ops = {
-	.can_userfault = shmem_can_userfault,
+	.can_userfault		= shmem_can_userfault,
+	.get_folio_noalloc	= shmem_get_folio_noalloc,
 };
 
 #endif /* CONFIG_USERFAULTFD */
--- a/mm/userfaultfd.c~shmem-userfaultfd-use-a-vma-callback-to-handle-uffdio_continue
+++ a/mm/userfaultfd.c
@@ -191,6 +191,7 @@ static int mfill_get_vma(struct mfill_st
 	struct userfaultfd_ctx *ctx = state->ctx;
 	uffd_flags_t flags = state->flags;
 	struct vm_area_struct *dst_vma;
+	const struct vm_uffd_ops *ops;
 	int err;
 
 	/*
@@ -232,10 +233,12 @@ static int mfill_get_vma(struct mfill_st
 	if (is_vm_hugetlb_page(dst_vma))
 		return 0;
 
-	if (!vma_is_anonymous(dst_vma) && !vma_is_shmem(dst_vma))
+	ops = vma_uffd_ops(dst_vma);
+	if (!ops)
 		goto out_unlock;
-	if (!vma_is_shmem(dst_vma) &&
-	    uffd_flags_mode_is(flags, MFILL_ATOMIC_CONTINUE))
+
+	if (uffd_flags_mode_is(flags, MFILL_ATOMIC_CONTINUE) &&
+	    !ops->get_folio_noalloc)
 		goto out_unlock;
 
 	return 0;
@@ -575,6 +578,7 @@ out:
 static int mfill_atomic_pte_continue(struct mfill_state *state)
 {
 	struct vm_area_struct *dst_vma = state->vma;
+	const struct vm_uffd_ops *ops = vma_uffd_ops(dst_vma);
 	unsigned long dst_addr = state->dst_addr;
 	pgoff_t pgoff = linear_page_index(dst_vma, dst_addr);
 	struct inode *inode = file_inode(dst_vma->vm_file);
@@ -584,17 +588,16 @@ static int mfill_atomic_pte_continue(str
 	struct page *page;
 	int ret;
 
-	ret = shmem_get_folio(inode, pgoff, 0, &folio, SGP_NOALLOC);
-	/* Our caller expects us to return -EFAULT if we failed to find folio */
-	if (ret == -ENOENT)
-		ret = -EFAULT;
-	if (ret)
-		goto out;
-	if (!folio) {
-		ret = -EFAULT;
-		goto out;
+	if (!ops) {
+		VM_WARN_ONCE(1, "UFFDIO_CONTINUE for unsupported VMA");
+		return -EOPNOTSUPP;
 	}
 
+	folio = ops->get_folio_noalloc(inode, pgoff);
+	/* Our caller expects us to return -EFAULT if we failed to find folio */
+	if (IS_ERR_OR_NULL(folio))
+		return -EFAULT;
+
 	page = folio_file_page(folio, pgoff);
 	if (PageHWPoison(page)) {
 		ret = -EIO;
@@ -607,13 +610,12 @@ static int mfill_atomic_pte_continue(str
 		goto out_release;
 
 	folio_unlock(folio);
-	ret = 0;
-out:
-	return ret;
+	return 0;
+
 out_release:
 	folio_unlock(folio);
 	folio_put(folio);
-	goto out;
+	return ret;
 }
 
 /* Handles UFFDIO_POISON for all non-hugetlb VMAs. */
_

Patches currently in -mm which might be from rppt@kernel.org are

userfaultfd-introduce-mfill_copy_folio_locked-helper.patch
userfaultfd-introduce-struct-mfill_state.patch
userfaultfd-introduce-mfill_establish_pmd-helper.patch
userfaultfd-introduce-mfill_get_vma-and-mfill_put_vma.patch
userfaultfd-retry-copying-with-locks-dropped-in-mfill_atomic_pte_copy.patch
userfaultfd-move-vma_can_userfault-out-of-line.patch
userfaultfd-introduce-vm_uffd_ops.patch
shmem-userfaultfd-use-a-vma-callback-to-handle-uffdio_continue.patch
userfaultfd-introduce-vm_uffd_ops-alloc_folio.patch
shmem-userfaultfd-implement-shmem-uffd-operations-using-vm_uffd_ops.patch
userfaultfd-mfill_atomic-remove-retry-logic.patch