From: George Dunlap <george.dunlap@eu.citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: Linux guests hitting VCPUOP_stop_singleshot_timer returning -EINVAL.
Date: Thu, 9 May 2013 15:52:16 +0100
Message-ID: <518BB820.4020408@eu.citrix.com>
In-Reply-To: <20130508170227.GA32140@phenom.dumpdata.com>
On 08/05/13 18:02, Konrad Rzeszutek Wilk wrote:
> Hey George,
>
> The issue I am hitting happens quite often, but not always, and I
> hope you can help me understand the bootup part of PVHVM guests,
> which I think is part of the problem. I can reproduce this all the
> way back to Xen 4.1, so I think this has been in the hypervisor
> for some time.
>
> First off, the guest starts with just one VCPU:
>
> builder='hvm'
> device_model_version = 'qemu-xen-traditional'
> disk = [ 'file:/mnt/lab/latest/root_image.iso,hdc:cdrom,r']
> memory = 2048
> boot="d"
> maxvcpus=32
> vcpus=1
> serial='pty'
> vnclisten="0.0.0.0"
> name="latest"
> vif = [ 'mac=00:0F:4B:00:00:68, bridge=switch' ]
>
>
> And I hit the issue when I try to add more VCPUs: xl vcpu-set latest 16,
> which triggers this bug in the Linux code (see below for the code):
> ..
>
> May 8 16:03:48 (none) udevd-work[2153]: error opening ATTR{/sys/devices/system/cpu/cpu1/online} for writing: No such file or directory
>
>
> [ 84.585905] CPU 1 got hotplugged
>
> [ 84.590192] installing Xen timer for CPU 1
>
> [ 84.596371] SMP alternatives: lockdep: fixing up alternatives
>
> [ 84.603560] SMP alternatives: switching to SMP code
>
> [ 84.619508] smpboot: Booting Node 0 Processor 1 APIC 0x8
>
> [ 84.639766] ------------[ cut here ]------------
>
> [ 84.639766] WARNING: at /home/konrad/ssd/konrad/linux/arch/x86/xen/time.c:336 xen_vcpuop_set_mode+0xc2/0xd0()
>
> [ 84.639766] Hardware name: HVM domU
>
> [ 84.639766] VCPUOP_stop_singleshot_timer on CPU1 ret: -22
>
> [ 84.639766] Modules linked in: dm_multipath dm_mod iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi libcrc32c crc32c sg sr_mod cdrom ata_generic crc32c_intel ata_piix libata scsi_mod xen_blkfront xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xen_kbdfront xenfs xen_privcmd
>
> [ 84.639766] Pid: 0, comm: swapper/1 Not tainted 3.9.0upstream-00022-g49c1bf1-dirty #6
>
> [ 84.639766] Call Trace:
>
> [ 84.639766] [<ffffffff8109261a>] warn_slowpath_common+0x7a/0xc0
>
> [ 84.639766] [<ffffffff81092701>] warn_slowpath_fmt+0x41/0x50
>
> [ 84.639766] [<ffffffff810e8ae4>] ? tick_notify+0x114/0x420
>
> [ 84.639766] [<ffffffff81044fe2>] xen_vcpuop_set_mode+0xc2/0xd0
>
> [ 84.639766] [<ffffffff810e8385>] clockevents_set_mode+0x25/0x70
>
> [ 84.639766] [<ffffffff810e83e6>] clockevents_shutdown+0x16/0x30
>
> [ 84.639766] [<ffffffff810e84b7>] clockevents_exchange_device+0xb7/0x110
>
> [ 84.639766] [<ffffffff810e8ae4>] ? tick_notify+0x114/0x420
>
> [ 84.639766] [<ffffffff810e8b99>] tick_notify+0x1c9/0x420
>
> [ 84.639766] [<ffffffff810e8221>] ? clockevents_register_device+0x31/0x170
>
> [ 84.639766] [<ffffffff8169dddd>] notifier_call_chain+0x4d/0x70
>
> [ 84.639766] [<ffffffff810be851>] raw_notifier_call_chain+0x11/0x20
>
> [ 84.639766] [<ffffffff810e82d0>] clockevents_register_device+0xe0/0x170
>
> [ 84.639766] [<ffffffff8104507c>] xen_setup_cpu_clockevents+0x2c/0x50
>
> [ 84.639766] [<ffffffff810450b6>] xen_hvm_setup_cpu_clockevents+0x16/0x20
>
> [ 84.639766] [<ffffffff8168d4af>] start_secondary+0x1ea/0x1f9
>
> [ 84.639766] ---[ end trace f218984223a7067d ]---
>
> [ 84.639766] ------------[ cut here ]------------
>
> [ 84.639766] WARNING: at /home/konrad/ssd/konrad/linux/arch/x86/xen/time.c:338 xen_vcpuop_set_mode+0x9f/0xd0()
>
> [ 84.639766] Hardware name: HVM domU
>
> [ 84.639766] VCPUOP_stop_periodic_timer on CPU1 ret: -22
>
> [ 84.639766] Modules linked in: dm_multipath dm_mod iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi libcrc32c crc32c sg sr_mod cdrom ata_generic crc32c_intel ata_piix libata scsi_mod xen_blkfront xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xen_kbdfront xenfs xen_privcmd
>
> [ 84.639766] Pid: 0, comm: swapper/1 Tainted: G W 3.9.0upstream-00022-g49c1bf1-dirty #6
>
> [ 84.639766] Call Trace:
>
> [ 84.639766] [<ffffffff8109261a>] warn_slowpath_common+0x7a/0xc0
>
> [ 84.639766] [<ffffffff81092701>] warn_slowpath_fmt+0x41/0x50
>
> [ 84.639766] [<ffffffff810e8ae4>] ? tick_notify+0x114/0x420
>
> [ 84.639766] [<ffffffff81044fbf>] xen_vcpuop_set_mode+0x9f/0xd0
>
> [ 84.639766] [<ffffffff810e8385>] clockevents_set_mode+0x25/0x70
>
> [ 84.639766] [<ffffffff810e83e6>] clockevents_shutdown+0x16/0x30
>
> [ 84.639766] [<ffffffff810e84b7>] clockevents_exchange_device+0xb7/0x110
>
> [ 84.639766] [<ffffffff810e8ae4>] ? tick_notify+0x114/0x420
>
> [ 84.639766] [<ffffffff810e8b99>] tick_notify+0x1c9/0x420
>
> [ 84.639766] [<ffffffff810e8221>] ? clockevents_register_device+0x31/0x170
>
> [ 84.639766] [<ffffffff8169dddd>] notifier_call_chain+0x4d/0x70
>
> [ 84.639766] [<ffffffff810be851>] raw_notifier_call_chain+0x11/0x20
>
> [ 84.639766] [<ffffffff810e82d0>] clockevents_register_device+0xe0/0x170
>
> [ 84.639766] [<ffffffff8104507c>] xen_setup_cpu_clockevents+0x2c/0x50
>
> [ 84.639766] [<ffffffff810450b6>] xen_hvm_setup_cpu_clockevents+0x16/0x20
>
> [ 84.639766] [<ffffffff8168d4af>] start_secondary+0x1ea/0x1f9
>
> [ 84.639766] ---[ end trace f218984223a7067e ]---
>
> [ 84.639766] ------------[ cut here ]------------
>
> [ 84.639766] kernel BUG at /home/konrad/ssd/konrad/linux/arch/x86/xen/time.c:340!
>
> [ 84.639766] invalid opcode: 0000 [#1] SMP
>
> [ 84.639766] Modules linked in: dm_multipath dm_mod iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi libcrc32c crc32c sg sr_mod cdrom ata_generic crc32c_intel ata_piix libata scsi_mod xen_blkfront xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea xen_kbdfront xenfs xen_privcmd
>
> [ 84.639766] CPU 1
>
> [ 84.639766] Pid: 0, comm: swapper/1 Tainted: G W 3.9.0upstream-00022-g49c1bf1-dirty #6 Xen HVM domU
>
> [ 84.639766] RIP: 0010:[<ffffffff81044fbf>] [<ffffffff81044fbf>] xen_vcpuop_set_mode+0x9f/0xd0
>
> [ 84.639766] RSP: 0000:ffff880073c6bd98 EFLAGS: 00010092
>
> [ 84.639766] RAX: 0000000000000024 RBX: 0000000000000001 RCX: 0000000000000000
>
> [ 84.639766] RDX: ffff880073c68300 RSI: 0000000000000000 RDI: 0000000000000009
>
> [ 84.639766] RBP: ffff880073c6bdb8 R08: 0000000000000001 R09: 0000000000000000
>
> [ 84.639766] R10: 0000000000000258 R11: 0000000000000000 R12: 00000000ffffffea
>
> [ 84.639766] R13: 0000000000000001 R14: 0000000000000000 R15: 0000000000000082
>
> [ 84.639766] FS: 0000000000000000(0000) GS:ffff880074220000(0000) knlGS:0000000000000000
>
> [ 84.639766] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>
> [ 84.639766] CR2: 0000000000000000 CR3: 0000000001c0c000 CR4: 00000000000406e0
>
> [ 84.639766] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
>
> [ 84.639766] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
>
> [ 84.639766] Process swapper/1 (pid: 0, threadinfo ffff880073c6a000, task ffff880073c68300)
>
> [ 84.639766] Stack:
>
> [ 84.639766] 0000000000000000 ffff88007422be40 0000000000000001 0000000000000082
>
> [ 84.639766] ffff880073c6bdd8 ffffffff810e8385 ffff88007422be40 ffff88007422be40
>
> [ 84.639766] ffff880073c6bdf8 ffffffff810e83e6 ffff880073c6bdf8 0000000000000000
>
> [ 84.639766] Call Trace:
>
> [ 84.639766] [<ffffffff810e8385>] clockevents_set_mode+0x25/0x70
>
> [ 84.639766] [<ffffffff810e83e6>] clockevents_shutdown+0x16/0x30
>
> [ 84.639766] [<ffffffff810e84b7>] clockevents_exchange_device+0xb7/0x110
>
> [ 84.639766] [<ffffffff810e8ae4>] ? tick_notify+0x114/0x420
>
> [ 84.639766] [<ffffffff810e8b99>] tick_notify+0x1c9/0x420
>
> [ 84.639766] [<ffffffff810e8221>] ? clockevents_register_device+0x31/0x170
>
> [ 84.639766] [<ffffffff8169dddd>] notifier_call_chain+0x4d/0x70
>
> [ 84.639766] [<ffffffff810be851>] raw_notifier_call_chain+0x11/0x20
>
> [ 84.639766] [<ffffffff810e82d0>] clockevents_register_device+0xe0/0x170
>
> [ 84.639766] [<ffffffff8104507c>] xen_setup_cpu_clockevents+0x2c/0x50
>
> [ 84.639766] [<ffffffff810450b6>] xen_hvm_setup_cpu_clockevents+0x16/0x20
>
> [ 84.639766] [<ffffffff8168d4af>] start_secondary+0x1ea/0x1f9
>
> [ 84.639766] Code: 00 48 c7 c7 f8 bc 9b 81 e8 bf d6 04 00 eb c9 89 d9 48 c7 c2 98 bd 9b 81 be 52 01 00 00 48 c7 c7 f8 bc 9b 81 31 c0 e8 01 d7 04 00 <0f> 0b eb fe 41 89 c0 89 d9 48 c7 c2 68 bd 9b 81 be 50 01 00 00
>
> [ 84.639766] RIP [<ffffffff81044fbf>] xen_vcpuop_set_mode+0x9f/0xd0
>
> [ 84.639766] RSP <ffff880073c6bd98>
>
> [ 84.639766] ---[ end trace f218984223a7067f ]---
>
> [ 84.639766] Kernel panic - not syncing: Attempted to kill the idle task!
>
> Parsing config from /mnt/lab/latest/vm.cfg
> Daemon running with PID 7341
>
> Here is the patch that I applied on v3.9 (along with some other patches
> to fix the VCPU hotplug path in there):
>
> diff --git a/arch/x86/xen/time.c b/arch/x86/xen/time.c
> index 3d88bfd..d2d1248 100644
> --- a/arch/x86/xen/time.c
> +++ b/arch/x86/xen/time.c
> @@ -319,7 +319,7 @@ static void xen_vcpuop_set_mode(enum clock_event_mode mode,
>  				struct clock_event_device *evt)
>  {
>  	int cpu = smp_processor_id();
> -
> +	int rc;
>  	switch (mode) {
>  	case CLOCK_EVT_MODE_PERIODIC:
>  		WARN_ON(1); /* unsupported */
> @@ -332,9 +332,13 @@ static void xen_vcpuop_set_mode(enum clock_event_mode mode,
> 
>  	case CLOCK_EVT_MODE_UNUSED:
>  	case CLOCK_EVT_MODE_SHUTDOWN:
> -		if (HYPERVISOR_vcpu_op(VCPUOP_stop_singleshot_timer, cpu, NULL) ||
> -		    HYPERVISOR_vcpu_op(VCPUOP_stop_periodic_timer, cpu, NULL))
> +		rc = HYPERVISOR_vcpu_op(VCPUOP_stop_singleshot_timer, cpu, NULL);
> +		WARN(rc, "VCPUOP_stop_singleshot_timer on CPU%d ret: %d\n", cpu, rc);
> +		rc |= HYPERVISOR_vcpu_op(VCPUOP_stop_periodic_timer, cpu, NULL);
> +		WARN(rc, "VCPUOP_stop_periodic_timer on CPU%d ret: %d\n", cpu, rc);
> +		if (rc) {
>  			BUG();
> +		}
>  		break;
>  	case CLOCK_EVT_MODE_RESUME:
>  		break;
>
>
> If you look at the kernel log, it says this is happening on VCPU1, but in the
> hypervisor (this is with another debug patch - this time attached) you can see:
>
> .. snip..
> (XEN) irq.c:375: Dom5 callback via changed to Direct Vector 0xf3
> (XEN) irq.c:270: Dom5 PCI link 0 changed 5 -> 0
> (XEN) irq.c:270: Dom5 PCI link 1 changed 10 -> 0
> (XEN) irq.c:270: Dom5 PCI link 2 changed 11 -> 0
> (XEN) irq.c:270: Dom5 PCI link 3 changed 5 -> 0
> ===> (XEN) d5 [`\x06\x04:-137435614851072] on VCPU4 <====
> (XEN) d5v0: timer [singleshot] in 385066us vcpu_singleshot_timer_fn+0/0xb
> (XEN) d5v0 on CPU-1 - - 0
> (XEN) d5v0: poll:0 not pending -
> (XEN) d5v1 on CPU-1 - - 0
> (XEN) d5v1: poll:0 pending pending sel
So the problem is that in Linux, smp_processor_id() seems to get "1",
but in Xen, "current->vcpu_id" gets 4?
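
(For reference, that would at least be consistent with the -EINVAL: as far as
I remember -- so treat the exact code as an assumption and check against your
Xen tree -- the single-shot timer sub-ops in do_vcpu_op() only operate on the
calling VCPU, roughly:

    case VCPUOP_stop_singleshot_timer:
        if ( v != current )
            return -EINVAL;

        stop_timer(&v->singleshot_timer);

        break;

so a stop request naming VCPU 1, issued while actually running as VCPU 4,
would be rejected.)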
Is this bit of code *supposed* to be running on cpu 1 (presumably as
part of routine maintenance) or on cpu 4 (as part of bringing up the cpu)?
I'm not really sure how to debug that -- where does Linux's
smp_processor_id() macro get the value from? Are you sure that it's
actually set up properly at this point in bringing up the cpu?
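
(My recollection -- again, an assumption worth checking against your tree --
is that on x86 it boils down to a per-cpu read, something like:

    /* include/linux/smp.h, simplified: the !DEBUG_PREEMPT case */
    #define smp_processor_id()  raw_smp_processor_id()

    /* arch/x86/include/asm/smp.h */
    #define raw_smp_processor_id()  (this_cpu_read(cpu_number))

so if the new cpu's per-cpu area / GS base isn't fully set up at the point
this path runs, it could plausibly read a stale or wrong cpu_number.)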
> And if I run xentrace and xenalyze it confirms:
>
> 0.097606174 --x- d32767v2 runstate_change d7v4 offline->runnable
> Creating vcpu 4 for dom 7
> ] 0.097606174 --x- d32767v2 22006(2:2:6) 1 [ 3 ]
> ] 0.097607074 --x- d32767v2 28004(2:8:4) 2 [ 7 4 ]
> 0.097607786 ---x d32767v3 pm_idle_end c1
> 0.097607995 --x- d32767v2 runstate_change d7v0 offline->runnable
> ] 0.097607995 --x- d32767v2 22006(2:2:6) 1 [ 1 ]
> ] 0.097608930 --x- d32767v2 28004(2:8:4) 2 [ 7 0 ]
> ] 0.097609486 ---x d32767v3 2800e(2:8:e) 2 [ 7fff 9a2abb ]
> ] 0.097609562 --x- d32767v2 2800e(2:8:e) 2 [ 7fff 49e9 ]
> ] 0.097609620 ---x d32767v3 2800f(2:8:f) 3 [ 7 8c2 1c9c380 ]
> 0.097609660 -x-- d32767v1 pm_idle_end c1
> ] 0.097609690 --x- d32767v2 2800f(2:8:f) 3 [ 7 3d7 1c9c380 ]
> ] 0.097609780 ---x d32767v3 2800a(2:8:a) 4 [ 7fff 3 7 4 ]
> ] 0.097609845 --x- d32767v2 2800a(2:8:a) 4 [ 7fff 2 7 0 ]
> 0.097609933 ---x d32767v3 runstate_change d32767v3 running->runnable
> 0.097609998 --x d32767v2 runstate_change d32767v2 running->runnable
> 0.097610125 -- x d?v? runstate_change d7v4 runnable->running
> 0.097610168 --x| d?v? runstate_change d7v0 runnable->running
> 0.097612126 -x|| d32767v1 runstate_continue d32767v1 running->running
> 0.097612739 -x|| d32767v1 pm_idle_start c1
> ! 0.097614366 --x| d7v0 vmentry
> ! 0.097615354 --|x d7v4 vmentry
> ] 0.097616526 --|x d7v4 vmexit exit_reason WBINVD eip 1
> ] 0.097617403 --|x d7v4 vmentry cycles 2104
> ] 0.097618322 --|x d7v4 vmexit exit_reason CPUID eip 1073
> ] 0.097618322 --|x d7v4 cpuid [ 0 d 756e6547 6c65746e 49656e69 ]
> ] 0.097619549 --|x d7v4 vmentry cycles 2944
> ] 0.097620297 --|x d7v4 vmexit exit_reason CPUID eip 10c0
>
> ..blah blah.. bootup of the VCPU.
I'm not sure what I'm looking at here -- I see d7v4 coming online, but I
don't see the hypercall that thinks it's running on cpu 1.
-George