From: Stefan Bader <stefan.bader@canonical.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Bug 791850 <791850@bugs.launchpad.net>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: Kernel 2.6.39+ hangs when running as HVM guest under Xen
Date: Wed, 10 Aug 2011 09:40:07 -0500
Message-ID: <4E429847.1060306@canonical.com>
In-Reply-To: <4E414A21.8010902@canonical.com>

On 09.08.2011 09:54, Stefan Bader wrote:
> On 08.08.2011 21:38, Konrad Rzeszutek Wilk wrote:
>> On Thu, Aug 04, 2011 at 02:59:05PM +0200, Stefan Bader wrote:
>>> Since kernel 2.6.39 we have been experiencing strange hangs when booting such
>>> kernels as HVM guests under Xen (similar hangs, but in different places, with
>>> CentOS 5.4 + Xen 3.4.3 as well as Xen 4.1 and a 3.0-based dom0). The problem
>>> only happens when running with more than one vcpu.
>>>
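(For reference, the guests showing this were plain HVM domains with more than one
vcpu; a minimal config sketch of the kind used to reproduce it follows. Paths and
names are placeholders, not the exact setup from the bug report, and the exact
syntax differs slightly between the xm and xl toolstacks.)

# minimal HVM guest config sketch (xm/xl style); placeholders only
builder = 'hvm'
name    = 'hvm-smp-test'
memory  = 1024
vcpus   = 2                       # the hang only shows up with vcpus > 1
disk    = [ 'file:/path/to/guest.img,hda,w' ]
vif     = [ 'bridge=xenbr0' ]
boot    = 'c'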
>>
>> Hey Stefan,
>>
>> We were all at the XenSummit and I think nobody got around to thinking about this
>> at all. Also the merge window opened, so that ate a good chunk of time. Anyhow..
>>
> 
> Ah, right. Know the feeling. :) I am travelling this week, too.
> 
>> Is this related to this: http://marc.info/?i=4E4070B4.1020008@it-infrastrukturen.org ?
>>
> 
> At a quick glance it seems to be different. What I was looking at were dom0
> setups which worked for HVM guests up to kernel 2.6.38, and locked up at some
> point when a later guest kernel was started in SMP mode.
> 
>>> I was able to examine some dumps[1] and it always seemed to be a weird
>>> situation. In one example (booting 3.0 HVM under Xen 3.4.3/2.6.18 dom0) the
>>> lockup always seemed to occur when the delayed mtrr init took place. Cpu#0
>>> seemed to have started the rendezvous (stop_cpu) but then been interrupted,
>>> and the other (I was using vcpu=2 for simplicity) was idling somewhere else but
>>> had the mtrr rendezvous handler queued up (it just never seemed to get started).
>>>
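(To illustrate the rendezvous mechanism I am talking about: roughly, the initiating
CPU sends an IPI to get every CPU into the handler and then spins until all of them
have checked in, so a single lost IPI stalls everyone. Below is a simplified
userspace sketch of that pattern, with made-up names; it is not the kernel's actual
mtrr/stop_machine code.)

/* Simplified model of an mtrr-style rendezvous, not the kernel code:
 * the initiator "sends an IPI" (sets a flag) to the other CPU, then both
 * spin until everyone has entered the handler.  If the IPI is never
 * delivered, the initiator spins forever -- which is what the hang
 * looked like in the dumps. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

#define NR_CPUS 2

static atomic_int entered;          /* CPUs that reached the handler */
static atomic_int ipi_delivered;    /* stands in for the IPI actually arriving */

static void rendezvous_handler(int cpu)
{
    atomic_fetch_add(&entered, 1);
    while (atomic_load(&entered) < NR_CPUS)
        ;                            /* wait for all CPUs to check in */
    printf("cpu%d: rendezvous complete, would program MTRRs here\n", cpu);
}

static void *other_cpu(void *arg)
{
    (void)arg;
    /* Secondary CPU only runs the handler once the "IPI" arrives. */
    while (!atomic_load(&ipi_delivered))
        usleep(1000);
    rendezvous_handler(1);
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, other_cpu, NULL);

    /* Initiating CPU: deliver the IPI, then enter the handler itself.
     * Comment the next line out to model a lost IPI: cpu0 then spins
     * in rendezvous_handler() forever, like the observed hang. */
    atomic_store(&ipi_delivered, 1);
    rendezvous_handler(0);

    pthread_join(t, NULL);
    return 0;
}
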
>>> Things seemed to indicate some IPI problem, but to be sure I bisected to find
>>> where the problem started. I ended up with the following patch which, when reverted,
>>> allows me to bring up a 3.0 HVM guest with more than one CPU without any problems.
>>>
>>> commit 99bbb3a84a99cd04ab16b998b20f01a72cfa9f4f
>>> Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>>> Date:   Thu Dec 2 17:55:10 2010 +0000
>>>
>>>     xen: PV on HVM: support PV spinlocks and IPIs
>>>
>>>     Initialize PV spinlocks on boot CPU right after native_smp_prepare_cpus
>>>     (that switch to APIC mode and initialize APIC routing); on secondary
>>>     CPUs on CPU_UP_PREPARE.
>>>
>>>     Enable the usage of event channels to send and receive IPIs when
>>>     running as a PV on HVM guest.
>>>
>>> Though I have not yet really understood why exactly this happens, I thought I
>>> would post the results so far. It feels like either signalling an IPI through the
>>> event channel does not come through or it goes to the wrong CPU. It did not always
>>> fail in exactly the same place. As said, the 3.0 guest running in the
>>> CentOS dom0 was locking up early, right after all CPUs were brought up, while
>>> during the bisect (using a kernel between 2.6.38 and .39-rc1) the lockup was later.
>>>
>>> Maybe someone has an immediate clue. I will dig a bit deeper into the dumps in
>>> the meantime. Looking at the description, which sounds like using event channels
>>
>> Anything turned up?
> 
> From the data structures everything seems to be set up correctly.
> 
>>> was only intended for PV on HVM guests, it is wrong in the first place to set
>>> the xen ipi functions on the HVM side...
>>
>> On true HVM - sure, but on PVonHVM it sounds right.
> 
> Though exactly that seems to be what is happening. So I am looking at a guest
> which is started as an HVM guest, and the patch modifies IPI delivery to be
> attempted through hypervisor calls instead of using the native APIC method.
>
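
(To make concrete what I mean by that: as I read the patch, the send-IPI operations
get replaced with ones that signal a per-CPU event channel via a hypercall instead
of writing to the local APIC. Below is a rough, self-contained sketch of that kind
of function-pointer override; the names are made up for illustration, not the actual
xen smp code, so treat it as an illustration only.)

/* Rough sketch of the kind of override the patch does: a table of
 * send-IPI operations that either goes through the (emulated) local APIC
 * or signals a per-CPU event channel via a hypercall.  Names here are
 * made up for illustration only. */
#include <stdio.h>

struct ipi_ops {
    void (*send_ipi)(int cpu, int vector);
};

static void apic_send_ipi(int cpu, int vector)
{
    /* native path: write the ICR of the (emulated) local APIC */
    printf("APIC IPI to cpu%d, vector 0x%x\n", cpu, vector);
}

static void evtchn_send_ipi(int cpu, int vector)
{
    /* PV-on-HVM path: an event channel send hypercall on the channel
     * that was bound to this cpu/vector pair at CPU bring-up.
     * If the hypervisor side or the binding is broken, the IPI is lost. */
    printf("event-channel notify for cpu%d (ipi %d)\n", cpu, vector);
}

static struct ipi_ops ipi_ops = { .send_ipi = apic_send_ipi };

int main(void)
{
    ipi_ops.send_ipi(1, 0xfb);        /* what a plain HVM guest would do */

    /* what the patch effectively does when Xen support is detected,
     * even though the guest was started as a plain HVM guest: */
    ipi_ops.send_ipi = evtchn_send_ipi;
    ipi_ops.send_ipi(1, 0xfb);
    return 0;
}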

As a bit more information, it seems that after upgrading the hypervisor to Xen
4.1.1 (with the same 3.0 dom0 kernel) the HVM guest (3.0 kernel) boots past the
initial hang, only to end up with ATA timeouts on the emulated IDE controller. *sigh*

So there also seems to be a dependency on the hypervisor version. 3.4.3 and 4.1.0
at least seemed to have this problem, while 4.1.1 has a different one.

Sounds a bit like some later versions of the hypervisor would handle the IPI
calls while older ones do not...
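
(One thing that might help narrow it down: the hypervisor debug keys can dump event
channel state, so one could check from dom0 whether the guest's IPI event channels
are left pending or masked while it is hung. Roughly, assuming the usual debug-keys
interface is available on these versions:)

# from dom0, while the guest is hung (xm on the 3.4 toolstack, xl on 4.1):
xm debug-keys e && xm dmesg | tail -n 100     # 'e' dumps event channel info
xl debug-keys q && xl dmesg | tail -n 100     # 'q' dumps domain/vcpu state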

>>>
>>> -Stefan
>>>
>>> [1] https://bugs.launchpad.net/ubuntu/+source/linux/+bug/791850
