public inbox for linux-kernel@vger.kernel.org
From: Borislav Petkov <bp@alien8.de>
To: Joerg Roedel <jroedel@suse.de>
Cc: Lai Jiangshan <jiangshanlai@gmail.com>,
	linux-kernel@vger.kernel.org, x86@kernel.org,
	Lai Jiangshan <laijs@linux.alibaba.com>,
	Andy Lutomirski <luto@kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Oleg Nesterov <oleg@redhat.com>,
	"Chang S. Bae" <chang.seok.bae@intel.com>,
	Jan Kiszka <jan.kiszka@siemens.com>,
	Tom Lendacky <thomas.lendacky@amd.com>
Subject: Re: [PATCH 3/3] x86/sev: The code for returning to user space is also in syscall gap
Date: Tue, 18 Jan 2022 11:32:31 +0100	[thread overview]
Message-ID: <YeaXP51ClhnV8Xfd@zn.tnic> (raw)
In-Reply-To: <Ybxt0g+U11hZhbSh@suse.de>

On Fri, Dec 17, 2021 at 12:00:34PM +0100, Joerg Roedel wrote:
> On Fri, Dec 17, 2021 at 11:30:10AM +0100, Borislav Petkov wrote:
> > I audited the handful of instructions in there and didn't find anything
> > that would cause a #VC...
> 
> If the hypervisor decides to mess with the code page for this path
> while a CPU is executing it, this will cause a #VC on that CPU, and
> that could hit in the syscall return path.

So I added a CPUID on that return path:

@@ -213,8 +213,11 @@ syscall_return_via_sysret:
 
        popq    %rdi
        popq    %rsp
+       cpuid


It results in the splat below. I.e., we're on the VC2 stack. We've
landed there because:

 * If entered from kernel-mode the return stack is validated first, and if it is
 * not safe to use (e.g. because it points to the entry stack) the #VC handler
 * will switch to a fall-back stack (VC2) and call a special handler function.

and what puts us there is, I think:

vc_switch_off_ist:
        if (!get_stack_info_noinstr(stack, current, &info) || info.type == STACK_TYPE_ENTRY ||
            info.type > STACK_TYPE_EXCEPTION_LAST)
                sp = __this_cpu_ist_top_va(VC2);

but I need to stare at this more later to figure it all out properly.

[    1.372783] Kernel panic - not syncing: Can't handle #VC exception from unsupported context: sp: 0xfffffe0000019f58, prev_sp: 0x7ffc79fd7e78, VC2 stack [0xfffffe0000018000:0xfffffe000001a000]
[    1.374828] CPU: 0 PID: 1 Comm: init Not tainted 5.16.0+ #6
[    1.375586] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
[    1.376553] Call Trace:
[    1.377030]  <#VC2>
[    1.377462]  dump_stack_lvl+0x48/0x5e
[    1.378038]  panic+0xfa/0x2c6
[    1.378570]  kernel_exc_vmm_communication+0x10e/0x160
[    1.379275]  asm_exc_vmm_communication+0x30/0x60
[    1.379934] RIP: 0010:syscall_return_via_sysret+0x28/0x2a
[    1.380669] Code: 00 00 41 5f 41 5e 41 5d 41 5c 5d 5b 5e 41 5a 41 59 41 58 58 5e 5a 5e 48 89 e7 65 48 8b 24 25 04 60 00 00 ff 77 28 ff 37 5f 5c <0f> a2 0f 01 f8 48 0f 07 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 00
[    1.384240] RSP: 0018:00007ffc79fd7e78 EFLAGS: 00010046
[    1.384977] RAX: 00005555e4b80000 RBX: 0000000000000001 RCX: 00007fc2d978ac17
[    1.385894] RDX: 0000000000000054 RSI: 00007fc2d9792e09 RDI: 0000000000000000
[    1.386816] RBP: 00007fc2d97724e0 R08: 00007ffc79fd9fe7 R09: 00007fc2d979ae88
[    1.387734] R10: 000000000000001c R11: 0000000000000246 R12: 00005555e448e040
[    1.388647] R13: 000000000000000b R14: 0000000000000000 R15: 00007ffc79fd8119
[    1.389559]  </#VC2>
[    1.391521] Kernel Offset: 0x7e00000 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffffbfffffff)
[    1.393015] ---[ end Kernel panic - not syncing: Can't handle #VC exception from unsupported context: sp: 0xfffffe0000019f58, prev_sp: 0x7ffc79fd7e78, VC2 stack [0xfffffe0000018000:0xfffffe000001a000] ]---


-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

Thread overview: 16+ messages
2021-12-13  4:22 [PATCH 0/3] x86/entry: Fix 3 suspicious bugs Lai Jiangshan
2021-12-13  4:22 ` [PATCH 1/3] X86/db: Change __this_cpu_read() to this_cpu_read() in hw_breakpoint_active() Lai Jiangshan
2021-12-13 19:09   ` Borislav Petkov
2021-12-14  2:51     ` Lai Jiangshan
2021-12-14  9:33       ` Borislav Petkov
2021-12-13 19:46   ` Peter Zijlstra
2021-12-13  4:22 ` [PATCH 2/3] x86/hw_breakpoint: Add stack_canary to hw_breakpoints denylist Lai Jiangshan
2021-12-13 19:57   ` Peter Zijlstra
2021-12-13  4:22 ` [PATCH 3/3] x86/sev: The code for returning to user space is also in syscall gap Lai Jiangshan
2021-12-14 21:51   ` Borislav Petkov
2021-12-17 10:05     ` Joerg Roedel
2021-12-17 10:30       ` Borislav Petkov
2021-12-17 11:00         ` Joerg Roedel
2022-01-18 10:32           ` Borislav Petkov [this message]
2022-01-18 15:37             ` Lai Jiangshan
2022-04-12 13:00     ` Lai Jiangshan
