From: "Marek Marczykowski-Górecki" <marmarek@invisiblethingslab.com>
To: xen-devel <xen-devel@lists.xen.org>,
kvm@vger.kernel.org, Joerg Roedel <joro@8bytes.org>
Subject: Re: Xen inside KVM on AMD: Linux HVM/PVH crashes on AP bring up
Date: Mon, 14 May 2018 00:03:56 +0200
Message-ID: <20180513220356.GA2731@mail-itl>
In-Reply-To: <20180416151403.GA2208@mail-itl>
On Mon, Apr 16, 2018 at 05:14:03PM +0200, Marek Marczykowski-Górecki wrote:
> Hi,
>
> I'm trying to boot Linux PVH on Xen, which is itself running inside KVM
> on AMD hardware. As soon as a secondary CPU starts, the domain crashes,
> and strangely it does so without printing any related message on the
> console. The last message is "x86: Booting SMP configuration:".
> This happens for both PVH and HVM with 2 vcpus. PVH/HVM domains with 1
> vcpu work fine(*), as do PV domains with multiple vcpus.
>
> Using gdbsx I've managed to get the point where it crashes:
>
> (gdb) f 12
> #12 0xffffffff81025101 in do_error_trap (regs=0xffffc9000037fe78, error_code=-2401053088876204019,
> str=0x40 <irq_stack_union+64> <error: Cannot access memory at address 0x40>, trapnr=6, signr=-2)
> at arch/x86/kernel/traps.c:302
> 302 arch/x86/kernel/traps.c: No such file or directory.
> (gdb) p/x *regs
> $8 = {r15 = 0x0, r14 = 0x0, r13 = 0x0, r12 = 0x0, bp = 0x1, bx = 0xffff88007fd0f040, r11 = 0x0,
> r10 = 0x0, r9 = 0x38, r8 = 0x0, ax = 0xffffffe4, cx = 0xffffffff82251e68, dx = 0x0, si = 0x96,
> di = 0x82, orig_ax = 0xffffffffffffffff, ip = 0xffffffff81036bd3, cs = 0x10, flags = 0x10086,
> sp = 0xffffc9000037ff20, ss = 0x0}
> (gdb) info symbol 0xffffffff81036bd3
> identify_secondary_cpu + 83 in section .text
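> 
> (The session above was obtained by attaching gdbsx to the domain from
> dom0 and pointing gdb at it. Roughly like below; the domid and port
> here are just made-up examples:
> 
>   # in dom0: attach to domain 5 as a 64-bit guest, listen on port 9999
>   gdbsx -a 5 64 9999
>   # on a machine that has the guest's vmlinux:
>   gdb vmlinux -ex 'target remote dom0-host:9999'
> )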
>
> The crash point is the BUG_ON(c == &boot_cpu_data) check. BUG() on x86
> is implemented as a ud2 instruction, which matches the invalid-opcode
> trap (trapnr=6) handled by do_error_trap above. If I read it correctly,
> "c" (the first argument, passed in %rdi) is 0x82, which indeed isn't
> &boot_cpu_data (0xffffffff8234fe00), and isn't a plausible pointer at
> all.
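> 
> For reference, the function in question (arch/x86/kernel/cpu/common.c
> in v4.14) looks roughly like this; paraphrased and abridged, so check
> the exact tree before relying on offsets:
> 
>   void identify_secondary_cpu(struct cpuinfo_x86 *c)
>   {
>           /* an AP must never be identified via the boot CPU's cpuinfo */
>           BUG_ON(c == &boot_cpu_data);
>           identify_cpu(c);
>           mtrr_ap_init();
>           validate_apic_and_package_id(c);
>   }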
>
> Any idea?
>
> Version info:
> Linux (L0, KVM): 4.4.114-42 (OpenSUSE Leap 42.3)
> Xen (L1): 4.8.3
> Linux dom0 (L1): 4.14.18
> Linux guest: 4.14.18
Upgrading the L0 kernel to 4.16.8 and the guest (L2) kernel to 4.15.6
fixed this problem. I'm not sure whether the L0 kernel upgrade was
necessary (on its own it didn't help), but the guest kernel upgrade
definitely was.
> (*) apart from a 20s+ delay on flush_work in deferred_probe_initcall,
> before deferred_probe_work_func is actually called.
--
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?