From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefan Bader <stefan.bader@canonical.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
Bug 791850 <791850@bugs.launchpad.net>,
Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: Kernel 2.6.39+ hangs when running as HVM guest under Xen
Date: Mon, 8 Aug 2011 22:38:48 -0400
Message-ID: <20110809023848.GB13905@dumpdata.com>
In-Reply-To: <4E3A9799.50503@canonical.com>
On Thu, Aug 04, 2011 at 02:59:05PM +0200, Stefan Bader wrote:
> Since kernel 2.6.39 we have been experiencing strange hangs when booting those
> kernels as HVM guests in Xen (similar hangs, but in different places, when
> looking at CentOS 5.4 + Xen 3.4.3 as well as Xen 4.1 and a 3.0-based dom0).
> The problem only happens when running with more than one vcpu.
>
Hey Stefan,
We were all at the XenSummit, so I think nobody got a chance to think about this
at all. The merge window also opened, and that ate a good chunk of time. Anyhow..
Is this related to this: http://marc.info/?i=4E4070B4.1020008@it-infrastrukturen.org ?
> I was able to examine some dumps[1] and it always seemed to be a weird
> situation. In one example (booting 3.0 HVM under Xen 3.4.3/2.6.18 dom0) the
> lockup always seemed to occur when the delayed mtrr init took place. CPU#0
> seemed to have started the rendezvous (stop_cpu) but was then interrupted,
> and the other CPU (I was using vcpu=2 for simplicity) was idling somewhere
> else but had the mtrr rendezvous handler queued up (it just never seemed to
> get started).
>
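That symptom fits an IPI that never arrives: the rendezvous initiator waits
until every other CPU has run the handler, and the handler only runs once the
cross-CPU call IPI actually lands on that CPU. Very roughly (this is just a
sketch of the dependency, not the actual mtrr code):

#include <linux/smp.h>

static void rendezvous_handler(void *info)
{
	/* each online CPU would do its share of the MTRR update here */
}

static void kick_rendezvous(void)
{
	/* wait == 1: returns only once every CPU has run the handler,
	 * i.e. once every CPU has received the call-function IPI */
	on_each_cpu(rendezvous_handler, NULL, 1);
}

If the second vcpu never gets (or never acks) that IPI, CPU#0 sits in the wait
path forever, which would look exactly like your dump.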
> Things seemed to point at some IPI problem, but to be sure I bisected to find
> where the problem started. I ended up with the following patch which, when
> reverted, allows me to bring up a 3.0 HVM guest with more than one CPU without
> any problems.
>
> commit 99bbb3a84a99cd04ab16b998b20f01a72cfa9f4f
> Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> Date: Thu Dec 2 17:55:10 2010 +0000
>
> xen: PV on HVM: support PV spinlocks and IPIs
>
> Initialize PV spinlocks on boot CPU right after native_smp_prepare_cpus
> (that switch to APIC mode and initialize APIC routing); on secondary
> CPUs on CPU_UP_PREPARE.
>
> Enable the usage of event channels to send and receive IPIs when
> running as a PV on HVM guest.
>
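For reference, the mechanics of that patch, from memory - so treat this as a
simplified sketch rather than a verbatim copy of arch/x86/xen/smp.c: each vcpu
binds one event channel per IPI type at bring-up, and sending an IPI then
becomes an event-channel notification instead of an APIC ICR write.

#include <linux/interrupt.h>
#include <linux/smp.h>
#include <xen/events.h>

static irqreturn_t callfunc_interrupt(int irq, void *dev_id)
{
	/* runs on the target vcpu when its event channel fires */
	generic_smp_call_function_interrupt();
	return IRQ_HANDLED;
}

static int setup_callfunc_ipi(unsigned int cpu)
{
	/* allocate an event channel bound to this vcpu for this IPI type */
	return bind_ipi_to_irqhandler(XEN_CALL_FUNCTION_VECTOR, cpu,
				      callfunc_interrupt, IRQF_PERCPU,
				      "callfunc", NULL);
}

static void send_callfunc_ipi(unsigned int cpu, int irq)
{
	/* irq is what bind_ipi_to_irqhandler() returned for that cpu */
	notify_remote_via_irq(irq);
}

So if the binding ends up associated with the wrong vcpu, or the event is never
delivered (say, because the callback vector is not wired up), the IPI silently
disappears - which matches what you describe below.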
> Though I have not yet really understood why exactly this happens, I thought I'd
> post the results so far. It feels like an IPI signalled through the event
> channel either does not come through or goes to the wrong CPU. The failure did
> not always happen at exactly the same place. As I said, the 3.0 guest running
> in the CentOS dom0 was locking up early, right after all CPUs were brought up,
> while during the bisect (using a kernel between 2.6.38 and 2.6.39-rc1) the
> lockup happened later.
>
> Maybe someone has a clue right away. I will dig a bit deeper into the dumps in
> the meantime. Looking at the description, which sounds as if using event channels
Anything turned up?
> was intended only for PV on HVM guests, it is wrong in the first place to set
> the Xen IPI functions on the HVM side...
On true HVM - sure, but on PVonHVM it sounds right.
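FWIW, the intent - as far as I recall the code, so take this as a sketch of the
logic rather than the exact source - is that the Xen smp_ops are only installed
when the guest can actually receive event channels through the HVM callback
vector; a plain HVM guest without that feature should stay on the native APIC
path:

#include <linux/init.h>
#include <xen/xen.h>
#include <xen/features.h>

/* from arch/x86/xen/smp.c, added by the patch above */
extern void xen_hvm_smp_init(void);

static void __init maybe_setup_xen_ipis(void)
{
	if (!xen_hvm_domain())
		return;			/* not running on Xen at all */

	if (!xen_feature(XENFEAT_hvm_callback_vector))
		return;			/* "true" HVM: keep native APIC IPIs */

	/*
	 * PV on HVM: event channels can be delivered via the callback
	 * vector, so routing IPIs (and PV spinlock kicks) through them
	 * should be safe here.
	 */
	xen_hvm_smp_init();
}

If that guard holds and you still lock up, then either the delivery itself is
broken or the per-vcpu binding ends up on the wrong vcpu, which is what your
dumps seem to suggest.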
>
> -Stefan
>
> [1] https://bugs.launchpad.net/ubuntu/+source/linux/+bug/791850
>