From: Yongting Lin <linyongting@bytedance.com>
To: anthony.yznaga@oracle.com
Cc: akpm@linux-foundation.org, andreyknvl@gmail.com, arnd@arndb.de,
brauner@kernel.org, catalin.marinas@arm.com,
dave.hansen@intel.com, david@redhat.com, ebiederm@xmission.com,
khalid@kernel.org, linux-arch@vger.kernel.org,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
luto@kernel.org, markhemm@googlemail.com, maz@kernel.org,
mhiramat@kernel.org, neilb@suse.de, pcc@google.com,
rostedt@goodmis.org, vasily.averin@linux.dev,
viro@zeniv.linux.org.uk, willy@infradead.org,
xhao@linux.alibaba.com
Subject: Re: [PATCH v2 13/20] x86/mm: enable page table sharing
Date: Tue, 12 Aug 2025 21:46:55 +0800
Message-ID: <20250812134655.68614-1-linyongting@bytedance.com>
In-Reply-To: <20250404021902.48863-14-anthony.yznaga@oracle.com>

Hi,

On 4/4/25 10:18 AM, Anthony Yznaga wrote:
> Enable x86 support for handling page faults in an mshare region by
> redirecting page faults to operate on the mshare mm_struct and vmas
> contained in it.
> Some permissions checks are done using vma flags in architecture-specific
> fault handling code so the actual vma needed to complete the handling
> is acquired before calling handle_mm_fault(). Because of this an
> ARCH_SUPPORTS_MSHARE config option is added.
>
> Signed-off-by: Anthony Yznaga <anthony.yznaga@oracle.com>
> ---
> arch/Kconfig | 3 +++
> arch/x86/Kconfig | 1 +
> arch/x86/mm/fault.c | 37 ++++++++++++++++++++++++++++++++++++-
> mm/Kconfig | 2 +-
> 4 files changed, 41 insertions(+), 2 deletions(-)
>
> diff --git a/arch/Kconfig b/arch/Kconfig
> index 9f6eb09ef12d..2e000fefe9b3 100644
> --- a/arch/Kconfig
> +++ b/arch/Kconfig
> @@ -1652,6 +1652,9 @@ config HAVE_ARCH_PFN_VALID
> config ARCH_SUPPORTS_DEBUG_PAGEALLOC
> bool
>
> +config ARCH_SUPPORTS_MSHARE
> + bool
> +
> config ARCH_SUPPORTS_PAGE_TABLE_CHECK
> bool
>
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index 1502fd0c3c06..1f1779decb44 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -125,6 +125,7 @@ config X86
> select ARCH_SUPPORTS_ACPI
> select ARCH_SUPPORTS_ATOMIC_RMW
> select ARCH_SUPPORTS_DEBUG_PAGEALLOC
> + select ARCH_SUPPORTS_MSHARE if X86_64
> select ARCH_SUPPORTS_PAGE_TABLE_CHECK if X86_64
> select ARCH_SUPPORTS_NUMA_BALANCING if X86_64
> select ARCH_SUPPORTS_KMAP_LOCAL_FORCE_MAP if NR_CPUS <= 4096
> diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
> index 296d294142c8..49659d2f9316 100644
> --- a/arch/x86/mm/fault.c
> +++ b/arch/x86/mm/fault.c
> @@ -1216,6 +1216,8 @@ void do_user_addr_fault(struct pt_regs *regs,
> struct mm_struct *mm;
> vm_fault_t fault;
> unsigned int flags = FAULT_FLAG_DEFAULT;
> + bool is_shared_vma;
> + unsigned long addr;
>
> tsk = current;
> mm = tsk->mm;
> @@ -1329,6 +1331,12 @@ void do_user_addr_fault(struct pt_regs *regs,
> if (!vma)
> goto lock_mmap;
>
> + /* mshare does not support per-VMA locks yet */
> + if (vma_is_mshare(vma)) {
> + vma_end_read(vma);
> + goto lock_mmap;
> + }
> +
> if (unlikely(access_error(error_code, vma))) {
> bad_area_access_error(regs, error_code, address, NULL, vma);
> count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
> @@ -1357,17 +1365,38 @@ void do_user_addr_fault(struct pt_regs *regs,
> lock_mmap:
>
> retry:
> + addr = address;
> + is_shared_vma = false;
> vma = lock_mm_and_find_vma(mm, address, regs);
> if (unlikely(!vma)) {
> bad_area_nosemaphore(regs, error_code, address);
> return;
> }
>
> + if (unlikely(vma_is_mshare(vma))) {
> + fault = find_shared_vma(&vma, &addr);
> +
> + if (fault) {
> + mmap_read_unlock(mm);
> + goto done;
> + }
> +
> + if (!vma) {
> + mmap_read_unlock(mm);
> + bad_area_nosemaphore(regs, error_code, address);
> + return;
> + }
> +
> + is_shared_vma = true;
> + }
> +
> /*
> * Ok, we have a good vm_area for this memory access, so
> * we can handle it..
> */
> if (unlikely(access_error(error_code, vma))) {
> + if (unlikely(is_shared_vma))
> + mmap_read_unlock(vma->vm_mm);
> bad_area_access_error(regs, error_code, address, mm, vma);
> return;
> }
> @@ -1385,7 +1414,11 @@ void do_user_addr_fault(struct pt_regs *regs,
> * userland). The return to userland is identified whenever
> * FAULT_FLAG_USER|FAULT_FLAG_KILLABLE are both set in flags.
> */
> - fault = handle_mm_fault(vma, address, flags, regs);
> + fault = handle_mm_fault(vma, addr, flags, regs);
> +
> + if (unlikely(is_shared_vma) && ((fault & VM_FAULT_COMPLETED) ||
> + (fault & VM_FAULT_RETRY) || fault_signal_pending(fault, regs)))
> + mmap_read_unlock(mm);

I was backporting the mshare patches to a 5.15 kernel and running some
basic tests, and I ran into a potential issue here.

Reaching this point means find_shared_vma() has completed successfully,
so host_mm->mmap_lock is held.

When the returned fault has VM_FAULT_COMPLETED or VM_FAULT_RETRY set, or
fault_signal_pending(fault, regs) is true, there is no later opportunity
to release the locks on both mm and host_mm (i.e. vma->vm_mm) in the
code that follows. As a result, vma->vm_mm->mmap_lock needs to be
released here as well, i.e. something like:
- fault = handle_mm_fault(vma, address, flags, regs);
+ fault = handle_mm_fault(vma, addr, flags, regs);
+
+ if (unlikely(is_shared_vma) && ((fault & VM_FAULT_COMPLETED) ||
+ (fault & VM_FAULT_RETRY) || fault_signal_pending(fault, regs))) {
+ mmap_read_unlock(vma->vm_mm);
+ mmap_read_unlock(mm);
+ }
>
> if (fault_signal_pending(fault, regs)) {
> /*
> @@ -1413,6 +1446,8 @@ void do_user_addr_fault(struct pt_regs *regs,
> goto retry;
> }
>
> + if (unlikely(is_shared_vma))
> + mmap_read_unlock(vma->vm_mm);
> mmap_read_unlock(mm);
> done:
> if (likely(!(fault & VM_FAULT_ERROR)))
> diff --git a/mm/Kconfig b/mm/Kconfig
> index e6c90db83d01..8a5a159457f2 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -1344,7 +1344,7 @@ config PT_RECLAIM
>
> config MSHARE
> bool "Mshare"
> - depends on MMU
> + depends on MMU && ARCH_SUPPORTS_MSHARE
> help
> Enable msharefs: A ram-based filesystem that allows multiple
> processes to share page table entries for shared pages. A file

Yongting Lin