From: Alvin Starr <alvin@netvel.net>
To: Ian Campbell <Ian.Campbell@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Cc: Tim Deegan <tim@xen.org>,
	Andres Lagar-Cavilla <andres@lagarcavilla.org>,
	xen-users@lists.xenproject.org
Subject: Re: [Xen-users] nestedhvm.
Date: Tue, 20 May 2014 08:47:37 -0400	[thread overview]
Message-ID: <537B4EE9.9010407@netvel.net> (raw)
In-Reply-To: <1400576182.25175.7.camel@kazak.uk.xensource.com>

Not sure if it helps, but the nested domUs have vcpus = 3 each.


On 05/20/2014 04:56 AM, Ian Campbell wrote:
> Adding xen-devel and some relevant maintainers.
>
>> On 05/19/2014 11:40 AM, Ian Campbell wrote:
>>> On Sun, 2014-05-18 at 08:02 -0400, Alvin Starr wrote:
>>>> I am trying to run nested hypervisors to do some OpenStack experiments.
>>>> I seem to be able to run Xen-on-Xen with no problems, but if I try to run
>>>> KVM-on-Xen the system seems to spontaneously reboot.
>>>> I get the same results with Xen 4.3 or 4.4.
>>>> The dom0 is running Fedora 20.
>>>> The experiment environment is CentOS 6 with RDO.
> On Mon, 2014-05-19 at 23:53 -0400, Alvin Starr wrote:
>> Here is the serial port output: the boot log along with the panic.
> Which contains:
>          (XEN) mm locking order violation: 260 > 222
>          (XEN) Xen BUG at mm-locks.h:118
> (full stack trace is below)
>
> That led me to
> http://lists.xen.org/archives/html/xen-devel/2013-02/msg01372.html but
> not to a patch. Was there one? I've grepped the git logs for hints but
> not found it...
>
> Ian.
>
> (XEN) ----[ Xen-4.3.2  x86_64  debug=n  Not tainted ]----
> (XEN) CPU:    23
> (XEN) RIP:    e008:[<ffff82c4c01ec7bb>] p2m_flush_table+0x1db/0x1f0
> (XEN) RFLAGS: 0000000000010286   CONTEXT: hypervisor
> (XEN) rax: ffff8308299ed020   rbx: ffff831835cb0540   rcx: 0000000000000000
> (XEN) rdx: ffff8308299e0000   rsi: 000000000000000a   rdi: ffff82c4c027d658
> (XEN) rbp: ffff82c4c031b648   rsp: ffff8308299e7998   r8:  0000000000000004
> (XEN) r9:  0000000000000000   r10: ffff82c4c022ce64   r11: 0000000000000003
> (XEN) r12: ffff83202cf99000   r13: 0000000000000000   r14: 0000000000000009
> (XEN) r15: 0000000000000000   cr0: 0000000080050033   cr4: 00000000000406f0
> (XEN) cr3: 0000001834178000   cr2: 0000000000000000
> (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
> (XEN) Xen stack trace from rsp=ffff8308299e7998:
> (XEN)    0000000000000008 ffff83202cf99000 0000000000000006 0000000000000000
> (XEN)    0000000000000009 ffff82c4c01f0431 0000000000000000 ffff831835cb0010
> (XEN)    0000000000371600 ffff82c4c01f1dc5 2000000000000000 00000000016e8400
> (XEN)    ffff831836e38c58 ffff8308299e7a08 0000000001836e38 ffff831836e38000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 ffff831835cb0010
> (XEN)    00000000000ee200 0000000000000000 0000000000000200 ffff831835cb0010
> (XEN)    0000000000000001 0000000000371600 0000000000000200 ffff82c4c01ecf50
> (XEN)    ffff83202cf99000 0000000700000006 0000000001836e37 ffff831835cb0010
> (XEN)    ffff83202cf99000 ffff8308299e7af0 0000000000000200 0000000000371600
> (XEN)    00000000016e8400 ffff82c4c01f3c8f ffff8308299e7aec 0000000035cb0010
> (XEN)    0000000000000001 00000000016e8400 0000000000000200 ffff82c400000007
> (XEN)    ffff83202cf99000 0000000700000000 ffff83040e4402c4 ffff831835cb0010
> (XEN)    0000000000000009 0000000000f9f600 00000000000ee200 0000000000000200
> (XEN)    ffff83202cf99000 ffff82c4c01f6019 00000000000ee200 ffff830800000200
> (XEN)    ffff831835cb04f8 ffff8308299e7f18 0000000000000003 ffff8308299e7c68
> (XEN)    0000000000000010 ffff82c4c01bcf83 ffff8308299e7ba0 ffff82c4c01f1222
> (XEN)    6000001800000000 ffffffff810402c4 ffff8308299e7c50 ffff8300aebdd000
> (XEN)    ffff8308299e7c50 ffff8300aebdd000 0000000000000000 ffff82c4c01c85dc
> (XEN)    ffffffff81039e63 0a9b00100000000f 00000000ffffffff 0000000000000000
> (XEN)    00000000ffffffff 0000000000000000 00000000ffffffff ffff831835cb0010
> (XEN) Xen call trace:
> (XEN)    [<ffff82c4c01ec7bb>] p2m_flush_table+0x1db/0x1f0
> (XEN)    [<ffff82c4c01f0431>] p2m_flush_nestedp2m+0x21/0x30
> (XEN)    [<ffff82c4c01f1dc5>] p2m_set_entry+0x565/0x650
> (XEN)    [<ffff82c4c01ecf50>] set_p2m_entry+0x90/0x130
> (XEN)    [<ffff82c4c01f3c8f>] p2m_pod_zero_check_superpage+0x21f/0x460
> (XEN)    [<ffff82c4c01f6019>] p2m_pod_demand_populate+0x699/0x890
> (XEN)    [<ffff82c4c01bcf83>] hvm_emulate_one+0xc3/0x1f0
> (XEN)    [<ffff82c4c01f1222>] p2m_gfn_to_mfn+0x392/0x3c0
> (XEN)    [<ffff82c4c01c85dc>] handle_mmio+0x7c/0x1e0
> (XEN)    [<ffff82c4c01f10e1>] p2m_gfn_to_mfn+0x251/0x3c0
> (XEN)    [<ffff82c4c01eca58>] __get_gfn_type_access+0x68/0x210
> (XEN)    [<ffff82c4c01c1843>] hvm_hap_nested_page_fault+0xc3/0x510
> (XEN)    [<ffff82c4c011a447>] csched_vcpu_wake+0x367/0x580
>
>>>> Any hints on what the problem may be or a good place to start to look to
>>>> diagnose it?
>>> You'll need to gather some logs, I think. Ideally a serial console log;
>>> failing that, try adding "noreboot" to your hypervisor command line so
>>> you can see the last messages before it reboots.
>>>
>>> Ian.
>>>
>>>
>>
>


-- 
Alvin Starr                   ||   voice: (905)513-7688
Netvel Inc.                   ||   Cell:  (416)806-0133
alvin@netvel.net              ||


Thread overview: 9+ messages
     [not found] <5378A14F.5@netvel.net>
     [not found] ` <1400514018.6114.19.camel@kazak.uk.xensource.com>
     [not found]   ` <537AD1A0.50702@netvel.net>
2014-05-20  8:56     ` [Xen-users] nestedhvm Ian Campbell
2014-05-20 12:47       ` Alvin Starr [this message]
2014-05-20 16:37       ` Tim Deegan
2014-05-20 16:59         ` Andres Lagar-Cavilla
2014-05-20 17:32           ` Alvin Starr
2014-05-21  2:05           ` Alvin Starr
2014-05-21  9:09           ` Ian Campbell
2014-05-21 11:20             ` Alvin Starr
2014-05-26 19:20             ` Alvin Starr
