From: "Edgecombe, Rick P" <rick.p.edgecombe@intel.com>
To: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"dave.hansen@linux.intel.com" <dave.hansen@linux.intel.com>
Cc: "Liam.Howlett@oracle.com" <Liam.Howlett@oracle.com>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
"ljs@kernel.org" <ljs@kernel.org>,
"surenb@google.com" <surenb@google.com>,
"vbabka@kernel.org" <vbabka@kernel.org>,
"shakeel.butt@linux.dev" <shakeel.butt@linux.dev>,
"akpm@linux-foundation.org" <akpm@linux-foundation.org>
Subject: Re: [PATCH 6/6] x86/mm: Avoid mmap lock for shadow stack pop fast path
Date: Mon, 4 May 2026 23:15:28 +0000 [thread overview]
Message-ID: <a46738ebd632ff046bc9b0d02a5382de5f2e9f2e.camel@intel.com> (raw)
In-Reply-To: <20260429182005.00BF70D8@davehans-spike.ostc.intel.com>
On Wed, 2026-04-29 at 11:20 -0700, Dave Hansen wrote:
> +	vma = lock_vma_under_rcu_wait(current->mm, *ssp);
> +	if (!vma)
> +		return -EINVAL;
> +
> +	if (!(vma->vm_flags & VM_SHADOW_STACK)) {
> +		vma_end_read(vma);
> +		return -EINVAL;
> +	}
> +
> +	err = get_shstk_data(&token_addr, (unsigned long __user *)*ssp);
Unfortunately, I don't think this will work for the shadow stack case because of
the user access. I get this splat from the shadow stack selftests:
======================================================
WARNING: possible circular locking dependency detected
7.1.0-rc1+ #2936 Not tainted
------------------------------------------------------
test_shadow_sta/930 is trying to acquire lock:
ff32a05fbc6a1008 (&mm->mmap_lock){++++}-{4:4}, at: __might_fault+0x3c/0x80
but task is already holding lock:
ff32a05f4caf3c48 (vm_lock){++++}-{0:0}, at: lock_vma_under_rcu+0xaf/0x2e0
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (vm_lock){++++}-{0:0}:
       lock_acquire+0xbd/0x2f0
       __vma_start_exclude_readers+0x8d/0x1e0
       __vma_start_write+0x56/0xe0
       vma_expand+0x7e/0x390
       relocate_vma_down+0x126/0x220
       setup_arg_pages+0x269/0x430
       load_elf_binary+0x3d1/0x1840
       bprm_execve+0x2cf/0x730
       kernel_execve+0xf6/0x160
       kernel_init+0xb9/0x1c0
       ret_from_fork+0x2eb/0x340
       ret_from_fork_asm+0x1a/0x30
-> #0 (&mm->mmap_lock){++++}-{4:4}:
       check_prev_add+0xf1/0xd00
       __lock_acquire+0x14a8/0x1ac0
       lock_acquire+0xbd/0x2f0
       __might_fault+0x5b/0x80
       restore_signal_shadow_stack+0xd6/0x270
       __do_sys_rt_sigreturn+0xdf/0xf0
       do_syscall_64+0x11c/0xf80
       entry_SYSCALL_64_after_hwframe+0x77/0x7f
other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  rlock(vm_lock);
                               lock(&mm->mmap_lock);
                               lock(vm_lock);
  rlock(&mm->mmap_lock);

  *** DEADLOCK ***
1 lock held by test_shadow_sta/930:
#0: ff32a05f4caf3c48 (vm_lock){++++}-{0:0}, at: lock_vma_under_rcu+0xaf/0x2e0
stack backtrace:
CPU: 18 UID: 0 PID: 930 Comm: test_shadow_sta Not tainted 7.1.0-rc1+ #2936
PREEMPT(full)
Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
Call Trace:
 <TASK>
 dump_stack_lvl+0x68/0xa0
 print_circular_bug+0x2ca/0x400
 check_noncircular+0x12f/0x150
 ? __lock_acquire+0x49c/0x1ac0
 check_prev_add+0xf1/0xd00
 ? reacquire_held_locks+0xe4/0x200
 __lock_acquire+0x14a8/0x1ac0
 lock_acquire+0xbd/0x2f0
 ? __might_fault+0x3c/0x80
 ? lock_is_held_type+0xa0/0x120
 ? __might_fault+0x3c/0x80
 __might_fault+0x5b/0x80
 ? __might_fault+0x3c/0x80
 restore_signal_shadow_stack+0xd6/0x270
 __do_sys_rt_sigreturn+0xdf/0xf0
 do_syscall_64+0x11c/0xf80
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x40212f
Code: 61 00 00 e8 73 f1 ff ff 48 8b 05 4c 61 00 00 31 d2 48 0f 38 f6 10 48 8b
44 24 08 64 48 2b 08
RSP: 002b:00007ffc286fb208 EFLAGS: 00010202
RAX: 0000000000000000 RBX: 0000000000000000 RCX: 00007ff628b187b0
RDX: 0000000000000000 RSI: 00000000066492a0 RDI: 0000000000000000
RBP: 00007ffc286fb360 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000202 R12: 0000000000000000
R13: 0000000000000001 R14: 00007ff628b6c000 R15: 0000000000406e18
I guess the problem is the lock ordering. I'm not sure if there are any
slow-path-avoidance details that could make this splat a false positive. But how
about this simpler munmap() case:
Shadow stack signal              munmap()
-------------------              --------
vma_start_read()
(VM_SHADOW_STACK check)
                                 mmap_write_lock()
mmap_read_lock() (user fault)
  <- deadlock
                                 vma_start_write()
                                   <- deadlock
> +
> +	vma_end_read(vma);
> +
> +	if (err)
> +		return err;
>
> 	/* Restore SSP aligned? */
> 	if (unlikely(!IS_ALIGNED(token_addr, 8)))
Thread overview: 15+ messages
2026-04-29 18:19 [PATCH 0/6] mm: Make per-VMA locks available in all builds Dave Hansen
2026-04-29 18:19 ` [PATCH 1/6] mm: Make per-VMA locks available universally Dave Hansen
2026-04-29 18:19 ` [PATCH 2/6] binder: Make shrinker rely solely on per-VMA lock Dave Hansen
2026-04-29 18:19 ` [PATCH 3/6] mm: Add RCU-based VMA lookup that waits for writers Dave Hansen
2026-04-29 18:20 ` [PATCH 4/6] binder: Remove mmap_lock fallback Dave Hansen
2026-04-29 18:20 ` [PATCH 5/6] tcp: Remove mmap_lock fallback path Dave Hansen
2026-04-29 18:20 ` [PATCH 6/6] x86/mm: Avoid mmap lock for shadow stack pop fast path Dave Hansen
2026-05-04 23:15 ` Edgecombe, Rick P [this message]
2026-04-29 18:22 ` [PATCH 0/6] mm: Make per-VMA locks available in all builds Dave Hansen
2026-04-30 8:11 ` Lorenzo Stoakes
2026-04-30 17:17 ` Suren Baghdasaryan
2026-04-30 17:20 ` Dave Hansen
2026-04-30 7:55 ` [syzbot ci] " syzbot ci
2026-04-30 16:59 ` Dave Hansen
[not found] ` <20260430072053.e0be1b431bcff02831f07e9d@linux-foundation.org>
2026-04-30 16:52 ` [PATCH 0/6] " Dave Hansen