xen-devel.lists.xenproject.org archive mirror
* [BUG] hhvm running on Ubuntu 13.04 with Xen Hypervisor - linux kernel panic
@ 2013-09-18 11:21 Craig Carnell
  2013-09-18 11:23 ` Craig Carnell
                   ` (2 more replies)
  0 siblings, 3 replies; 15+ messages in thread
From: Craig Carnell @ 2013-09-18 11:21 UTC (permalink / raw)
  To: xen-devel@lists.xen.org


Hi,

I am trying out HipHop VM (the PHP just-in-time compiler). My setup is a Rackspace Cloud Server running Ubuntu 13.04 with kernel 3.8.0-30-generic #44-Ubuntu SMP Thu Aug 22 20:52:24 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux.

The cloud server uses Xen Hypervisor.

HHVM is compiled from source from the GitHub repo. When I run hhvm from the command line (without any options or a PHP application), the system immediately crashes with a Linux kernel panic.

I have reported this issue on the HipHop GitHub issue page:

https://github.com/facebook/hiphop-php/issues/1065

I am not sure if this is a Linux kernel bug or a Xen hypervisor bug.

The output of /var/log/syslog:

Sep 18 10:55:58 web kernel: [92118.674736] general protection fault: 0000 [#1] SMP
Sep 18 10:55:58 web kernel: [92118.674754] Modules linked in: xenfs(F) xen_privcmd(F) xt_tcpudp(F) nf_conntrack_ipv4(F) nf_defrag_ipv4(F) xt_state(F) nf_conntrack(F) xt_comment(F) iptable_filter(F) ip_tables(F) x_tables(F) microcode(F) lp(F) parport(F)
Sep 18 10:55:58 web kernel: [92118.674781] CPU 0
Sep 18 10:55:58 web kernel: [92118.674787] Pid: 5020, comm: hhvm Tainted: GF            3.8.0-30-generic #44-Ubuntu
Sep 18 10:55:58 web kernel: [92118.674795] RIP: e030:[<ffffffff81003046>]  [<ffffffff81003046>] native_read_pmc+0x6/0x20
Sep 18 10:55:58 web kernel: [92118.674809] RSP: e02b:ffff8800026b9d20  EFLAGS: 00010083
Sep 18 10:55:58 web kernel: [92118.674814] RAX: ffffffff81c1bd80 RBX: 0000000000000000 RCX: 0000000000000000
Sep 18 10:55:58 web kernel: [92118.674819] RDX: 0000000000005f6c RSI: ffff8800f7c81900 RDI: 0000000000000000
Sep 18 10:55:58 web kernel: [92118.674824] RBP: ffff8800026b9d20 R08: 00000000000337d8 R09: ffff8800e933dcc0
Sep 18 10:55:58 web kernel: [92118.674830] R10: 00007fff2d3caea0 R11: 0000000000000246 R12: ffff8800f87ecc00
Sep 18 10:55:58 web kernel: [92118.674835] R13: ffff800000000001 R14: ffff8800f87ecd70 R15: 0000000000000010
Sep 18 10:55:58 web kernel: [92118.674844] FS:  00007f43d4c9b180(0000) GS:ffff8800ffc00000(0000) knlGS:0000000000000000
Sep 18 10:55:58 web kernel: [92118.674850] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
Sep 18 10:55:58 web kernel: [92118.674855] CR2: 000000000105ebc0 CR3: 00000000025cd000 CR4: 0000000000000660
Sep 18 10:55:58 web kernel: [92118.674861] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Sep 18 10:55:58 web kernel: [92118.674867] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Sep 18 10:55:58 web kernel: [92118.674872] Process hhvm (pid: 5020, threadinfo ffff8800026b8000, task ffff8800f7cfc5c0)
Sep 18 10:55:58 web kernel: [92118.674879] Stack:
Sep 18 10:55:58 web kernel: [92118.674882]  ffff8800026b9d58 ffffffff81024625 0000000000000000 ffff8800f87ecc00
Sep 18 10:55:58 web kernel: [92118.674893]  ffff8800f7c8190c ffffffff811231a0 0000000000000005 ffff8800026b9d68
Sep 18 10:55:58 web kernel: [92118.674902]  ffffffff81024689 ffff8800026b9d90 ffffffff811232a6 00000000ffff02ff
Sep 18 10:55:58 web kernel: [92118.674911] Call Trace:
Sep 18 10:55:58 web kernel: [92118.674920]  [<ffffffff81024625>] x86_perf_event_update+0x55/0xb0
Sep 18 10:55:58 web kernel: [92118.674929]  [<ffffffff811231a0>] ? perf_read+0x2f0/0x2f0
Sep 18 10:55:58 web kernel: [92118.674936]  [<ffffffff81024689>] x86_pmu_read+0x9/0x10
Sep 18 10:55:58 web kernel: [92118.674942]  [<ffffffff811232a6>] __perf_event_read+0x106/0x110
Sep 18 10:55:58 web kernel: [92118.674951]  [<ffffffff810b9987>] smp_call_function_single+0x147/0x170
Sep 18 10:55:58 web kernel: [92118.674959]  [<ffffffff811240d0>] ? perf_mmap+0x2f0/0x2f0
Sep 18 10:55:58 web kernel: [92118.674966]  [<ffffffff81122dda>] perf_event_read+0x10a/0x110
Sep 18 10:55:58 web kernel: [92118.674972]  [<ffffffff811240d0>] ? perf_mmap+0x2f0/0x2f0
Sep 18 10:55:58 web kernel: [92118.674979]  [<ffffffff811240dd>] perf_event_reset+0xd/0x20
Sep 18 10:55:58 web kernel: [92118.674987]  [<ffffffff8111ff08>] perf_event_for_each_child+0x38/0xa0
Sep 18 10:55:58 web kernel: [92118.674994]  [<ffffffff811240d0>] ? perf_mmap+0x2f0/0x2f0
Sep 18 10:55:58 web kernel: [92118.675001]  [<ffffffff8112255a>] perf_ioctl+0xba/0x340
Sep 18 10:55:58 web kernel: [92118.675009]  [<ffffffff811b1885>] ? fd_install+0x25/0x30
Sep 18 10:55:58 web kernel: [92118.675016]  [<ffffffff811a60e9>] do_vfs_ioctl+0x99/0x570
Sep 18 10:55:58 web kernel: [92118.675023]  [<ffffffff811a6651>] sys_ioctl+0x91/0xb0
Sep 18 10:55:58 web kernel: [92118.675031]  [<ffffffff816d575d>] system_call_fastpath+0x1a/0x1f
Sep 18 10:55:58 web kernel: [92118.675036] Code: 00 00 00 00 00 55 89 f9 48 89 e5 0f 32 31 ff 89 c0 48 c1 e2 20 89 3e 48 09 c2 48 89 d0 5d c3 66 0f 1f 44 00 00 55 89 f9 48 89 e5 <0f> 33 89 c0 48 c1 e2 20 48 09 c2 48 89 d0 5d c3 66 2e 0f 1f 84
Sep 18 10:55:58 web kernel: [92118.675103] RIP  [<ffffffff81003046>] native_read_pmc+0x6/0x20
Sep 18 10:55:58 web kernel: [92118.675110]  RSP <ffff8800026b9d20>
Sep 18 10:55:58 web kernel: [92118.675118] ---[ end trace 1a73231ba5f74716 ]---



* Re: [BUG] hhvm running on Ubuntu 13.04 with Xen Hypervisor - linux kernel panic
  2013-09-18 11:21 Craig Carnell
@ 2013-09-18 11:23 ` Craig Carnell
  2013-09-19  9:52 ` Wei Liu
  2013-09-20 20:09 ` Konrad Rzeszutek Wilk
  2 siblings, 0 replies; 15+ messages in thread
From: Craig Carnell @ 2013-09-18 11:23 UTC (permalink / raw)
  To: xen-devel@lists.xen.org


Xen version info:

[    0.000000] Xen: [mem 0x0000000000000000-0x000000000009ffff] usable
[    0.000000] Xen: [mem 0x00000000000a0000-0x00000000000fffff] reserved
[    0.000000] Xen: [mem 0x0000000000100000-0x00000000ffffffff] usable
[    0.000000] Booting paravirtualized kernel on Xen
[    0.000000] Xen version: 4.1.3 (preserve-AD)
[    0.000000] Xen: using vcpuop timer interface
[    0.000000] installing Xen timer for CPU 0
[    0.069612] installing Xen timer for CPU 1
[    0.124379] PCI: setting up Xen PCI frontend stub
[    0.210755] Initialising Xen virtual ethernet driver.


* Re: [BUG] hhvm running on Ubuntu 13.04 with Xen Hypervisor - linux kernel panic
  2013-09-18 11:21 Craig Carnell
  2013-09-18 11:23 ` Craig Carnell
@ 2013-09-19  9:52 ` Wei Liu
  2013-09-19 10:14   ` Craig Carnell
  2013-09-19 11:51   ` Dietmar Hahn
  2013-09-20 20:09 ` Konrad Rzeszutek Wilk
  2 siblings, 2 replies; 15+ messages in thread
From: Wei Liu @ 2013-09-19  9:52 UTC (permalink / raw)
  To: Craig Carnell; +Cc: wei.liu2, xen-devel@lists.xen.org

On Wed, Sep 18, 2013 at 11:21:18AM +0000, Craig Carnell wrote:
> Hi,
> 
> I am trying out hiphop vm (the php just in time compiler). My setup is a Rackspace Cloud Server running Ubuntu 13.04 with kernel 3.8.0-30-generic #44-Ubuntu SMP Thu Aug 22 20:52:24 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
> 
> The cloud server uses Xen Hypervisor.
> 
> Hiphopvm is compiled from source using the github repo. When running hhvm from the command line (without any options or php application) the system immediately crashes, throwing linux into a kernel panic and thus death.
> 
> I have reported this issue on hiphop github issue page:
> 
> https://github.com/facebook/hiphop-php/issues/1065
> 
> I am not sure if this is a linux kernel bug or a xen hypervisor bug:
> 

I'm not an expert on the VPMU stuff, but it seems that HHVM makes use of
the (virtual) hardware performance counters, which are not well supported
at the moment, and that is what causes this problem.

Compiling HHVM without hardware performance counter support might solve
this problem:

  ./configure -DNO_HARDWARE_COUNTERS=1
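
(For context, a minimal sketch of the kind of perf_event usage that sits
behind the faulting path in the trace above (perf_ioctl -> perf_event_reset
-> x86_pmu_read -> native_read_pmc). This is not HHVM's actual code, and the
event chosen is only an assumption for illustration; on an affected PV guest
the reset/read ioctls would presumably end up at the same rdpmc.)

  /* minimal sketch: open one hardware counter on the current task and
   * drive it with the same ioctls that perf_ioctl() handles in the oops */
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>
  #include <sys/types.h>
  #include <sys/ioctl.h>
  #include <sys/syscall.h>
  #include <linux/perf_event.h>

  static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                              int cpu, int group_fd, unsigned long flags)
  {
      return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
  }

  int main(void)
  {
      struct perf_event_attr attr;
      long long count = 0;
      int fd;

      memset(&attr, 0, sizeof(attr));
      attr.size = sizeof(attr);
      attr.type = PERF_TYPE_HARDWARE;            /* a real PMU counter  */
      attr.config = PERF_COUNT_HW_INSTRUCTIONS;  /* illustrative choice */
      attr.disabled = 1;

      fd = perf_event_open(&attr, 0, -1, -1, 0); /* this task, any CPU  */
      if (fd < 0) {
          perror("perf_event_open");
          return 1;
      }

      ioctl(fd, PERF_EVENT_IOC_RESET, 0);   /* reset reads the counter first */
      ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
      /* ... run some work here ... */
      ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
      if (read(fd, &count, sizeof(count)) == sizeof(count))
          printf("instructions: %lld\n", count);
      close(fd);
      return 0;
  }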

Wei.

> The output of /var/log/syslog:
> 
> Sep 18 10:55:58 web kernel: [92118.674736] general protection fault: 0000 [#1] SMP
> Sep 18 10:55:58 web kernel: [92118.674754] Modules linked in: xenfs(F) xen_privcmd(F) xt_tcpudp(F) nf_conntrack_ipv4(F) nf_defrag_ipv4(F) xt_state(F) nf_conntrack(F) xt_comment(F) iptable_filter(F) ip_tables(F) x_tables(F) microcode(F) lp(F) parport(F)
> Sep 18 10:55:58 web kernel: [92118.674781] CPU 0
> Sep 18 10:55:58 web kernel: [92118.674787] Pid: 5020, comm: hhvm Tainted: GF            3.8.0-30-generic #44-Ubuntu
> Sep 18 10:55:58 web kernel: [92118.674795] RIP: e030:[<ffffffff81003046>]  [<ffffffff81003046>] native_read_pmc+0x6/0x20
> Sep 18 10:55:58 web kernel: [92118.674809] RSP: e02b:ffff8800026b9d20  EFLAGS: 00010083
> Sep 18 10:55:58 web kernel: [92118.674814] RAX: ffffffff81c1bd80 RBX: 0000000000000000 RCX: 0000000000000000
> Sep 18 10:55:58 web kernel: [92118.674819] RDX: 0000000000005f6c RSI: ffff8800f7c81900 RDI: 0000000000000000
> Sep 18 10:55:58 web kernel: [92118.674824] RBP: ffff8800026b9d20 R08: 00000000000337d8 R09: ffff8800e933dcc0
> Sep 18 10:55:58 web kernel: [92118.674830] R10: 00007fff2d3caea0 R11: 0000000000000246 R12: ffff8800f87ecc00
> Sep 18 10:55:58 web kernel: [92118.674835] R13: ffff800000000001 R14: ffff8800f87ecd70 R15: 0000000000000010
> Sep 18 10:55:58 web kernel: [92118.674844] FS:  00007f43d4c9b180(0000) GS:ffff8800ffc00000(0000) knlGS:0000000000000000
> Sep 18 10:55:58 web kernel: [92118.674850] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
> Sep 18 10:55:58 web kernel: [92118.674855] CR2: 000000000105ebc0 CR3: 00000000025cd000 CR4: 0000000000000660
> Sep 18 10:55:58 web kernel: [92118.674861] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> Sep 18 10:55:58 web kernel: [92118.674867] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> Sep 18 10:55:58 web kernel: [92118.674872] Process hhvm (pid: 5020, threadinfo ffff8800026b8000, task ffff8800f7cfc5c0)
> Sep 18 10:55:58 web kernel: [92118.674879] Stack:
> Sep 18 10:55:58 web kernel: [92118.674882]  ffff8800026b9d58 ffffffff81024625 0000000000000000 ffff8800f87ecc00
> Sep 18 10:55:58 web kernel: [92118.674893]  ffff8800f7c8190c ffffffff811231a0 0000000000000005 ffff8800026b9d68
> Sep 18 10:55:58 web kernel: [92118.674902]  ffffffff81024689 ffff8800026b9d90 ffffffff811232a6 00000000ffff02ff
> Sep 18 10:55:58 web kernel: [92118.674911] Call Trace:
> Sep 18 10:55:58 web kernel: [92118.674920]  [<ffffffff81024625>] x86_perf_event_update+0x55/0xb0
> Sep 18 10:55:58 web kernel: [92118.674929]  [<ffffffff811231a0>] ? perf_read+0x2f0/0x2f0
> Sep 18 10:55:58 web kernel: [92118.674936]  [<ffffffff81024689>] x86_pmu_read+0x9/0x10
> Sep 18 10:55:58 web kernel: [92118.674942]  [<ffffffff811232a6>] __perf_event_read+0x106/0x110
> Sep 18 10:55:58 web kernel: [92118.674951]  [<ffffffff810b9987>] smp_call_function_single+0x147/0x170
> Sep 18 10:55:58 web kernel: [92118.674959]  [<ffffffff811240d0>] ? perf_mmap+0x2f0/0x2f0
> Sep 18 10:55:58 web kernel: [92118.674966]  [<ffffffff81122dda>] perf_event_read+0x10a/0x110
> Sep 18 10:55:58 web kernel: [92118.674972]  [<ffffffff811240d0>] ? perf_mmap+0x2f0/0x2f0
> Sep 18 10:55:58 web kernel: [92118.674979]  [<ffffffff811240dd>] perf_event_reset+0xd/0x20
> Sep 18 10:55:58 web kernel: [92118.674987]  [<ffffffff8111ff08>] perf_event_for_each_child+0x38/0xa0
> Sep 18 10:55:58 web kernel: [92118.674994]  [<ffffffff811240d0>] ? perf_mmap+0x2f0/0x2f0
> Sep 18 10:55:58 web kernel: [92118.675001]  [<ffffffff8112255a>] perf_ioctl+0xba/0x340
> Sep 18 10:55:58 web kernel: [92118.675009]  [<ffffffff811b1885>] ? fd_install+0x25/0x30
> Sep 18 10:55:58 web kernel: [92118.675016]  [<ffffffff811a60e9>] do_vfs_ioctl+0x99/0x570
> Sep 18 10:55:58 web kernel: [92118.675023]  [<ffffffff811a6651>] sys_ioctl+0x91/0xb0
> Sep 18 10:55:58 web kernel: [92118.675031]  [<ffffffff816d575d>] system_call_fastpath+0x1a/0x1f
> Sep 18 10:55:58 web kernel: [92118.675036] Code: 00 00 00 00 00 55 89 f9 48 89 e5 0f 32 31 ff 89 c0 48 c1 e2 20 89 3e 48 09 c2 48 89 d0 5d c3 66 0f 1f 44 00 00 55 89 f9 48 89 e5 <0f> 33 89 c0 48 c1 e2 20 48 09 c2 48 89 d0 5d c3 66 2e 0f 1f 84
> Sep 18 10:55:58 web kernel: [92118.675103] RIP  [<ffffffff81003046>] native_read_pmc+0x6/0x20
> Sep 18 10:55:58 web kernel: [92118.675110]  RSP <ffff8800026b9d20>
> Sep 18 10:55:58 web kernel: [92118.675118] ---[ end trace 1a73231ba5f74716 ]---
> 

> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


* Re: [BUG] hhvm running on Ubuntu 13.04 with Xen Hypervisor - linux kernel panic
  2013-09-19  9:52 ` Wei Liu
@ 2013-09-19 10:14   ` Craig Carnell
  2013-09-19 10:28     ` Wei Liu
  2013-09-19 11:51   ` Dietmar Hahn
  1 sibling, 1 reply; 15+ messages in thread
From: Craig Carnell @ 2013-09-19 10:14 UTC (permalink / raw)
  To: Wei Liu; +Cc: xen-devel@lists.xen.org

Thanks! But when I run

./configure -DNO_HARDWARE_COUNTERS=1


I get:

Manually-specified variables were not used by the project:

    NO_HARDWARE_COUNTERS



On 19/09/2013 10:52, "Wei Liu" <wei.liu2@citrix.com> wrote:

>On Wed, Sep 18, 2013 at 11:21:18AM +0000, Craig Carnell wrote:
>> Hi,
>> 
>> I am trying out hiphop vm (the php just in time compiler). My setup is
>>a Rackspace Cloud Server running Ubuntu 13.04 with kernel
>>3.8.0-30-generic #44-Ubuntu SMP Thu Aug 22 20:52:24 UTC 2013 x86_64
>>x86_64 x86_64 GNU/Linux
>> 
>> The cloud server uses Xen Hypervisor.
>> 
>> Hiphopvm is compiled from source using the github repo. When running
>>hhvm from the command line (without any options or php application) the
>>system immediately crashes, throwing linux into a kernel panic and thus
>>death.
>> 
>> I have reported this issue on hiphop github issue page:
>> 
>> https://github.com/facebook/hiphop-php/issues/1065
>> 
>> I am not sure if this is a linux kernel bug or a xen hypervisor bug:
>> 
>
>I'm not a expert on VPMU stuffs, but it seems that HHVM makes use of
>(virtual) hardware performance counter which is not well supported at
>the moment, which causes this problem.
>
>Try to compile HHVM without hardware performance counter support might
>solve this problem.
>
>  ./configure -DNO_HARDWARE_COUNTERS=1
>
>Wei.
>
>> The output of /var/log/syslog:
>> 
>> Sep 18 10:55:58 web kernel: [92118.674736] general protection fault:
>>0000 [#1] SMP
>> Sep 18 10:55:58 web kernel: [92118.674754] Modules linked in: xenfs(F)
>>xen_privcmd(F) xt_tcpudp(F) nf_conntrack_ipv4(F) nf_defrag_ipv4(F)
>>xt_state(F) nf_conntrack(F) xt_comment(F) iptable_filter(F) ip_tables(F)
>>x_tables(F) microcode(F) lp(F) parport(F)
>> Sep 18 10:55:58 web kernel: [92118.674781] CPU 0
>> Sep 18 10:55:58 web kernel: [92118.674787] Pid: 5020, comm: hhvm
>>Tainted: GF            3.8.0-30-generic #44-Ubuntu
>> Sep 18 10:55:58 web kernel: [92118.674795] RIP:
>>e030:[<ffffffff81003046>]  [<ffffffff81003046>] native_read_pmc+0x6/0x20
>> Sep 18 10:55:58 web kernel: [92118.674809] RSP: e02b:ffff8800026b9d20
>>EFLAGS: 00010083
>> Sep 18 10:55:58 web kernel: [92118.674814] RAX: ffffffff81c1bd80 RBX:
>>0000000000000000 RCX: 0000000000000000
>> Sep 18 10:55:58 web kernel: [92118.674819] RDX: 0000000000005f6c RSI:
>>ffff8800f7c81900 RDI: 0000000000000000
>> Sep 18 10:55:58 web kernel: [92118.674824] RBP: ffff8800026b9d20 R08:
>>00000000000337d8 R09: ffff8800e933dcc0
>> Sep 18 10:55:58 web kernel: [92118.674830] R10: 00007fff2d3caea0 R11:
>>0000000000000246 R12: ffff8800f87ecc00
>> Sep 18 10:55:58 web kernel: [92118.674835] R13: ffff800000000001 R14:
>>ffff8800f87ecd70 R15: 0000000000000010
>> Sep 18 10:55:58 web kernel: [92118.674844] FS:  00007f43d4c9b180(0000)
>>GS:ffff8800ffc00000(0000) knlGS:0000000000000000
>> Sep 18 10:55:58 web kernel: [92118.674850] CS:  e033 DS: 0000 ES: 0000
>>CR0: 000000008005003b
>> Sep 18 10:55:58 web kernel: [92118.674855] CR2: 000000000105ebc0 CR3:
>>00000000025cd000 CR4: 0000000000000660
>> Sep 18 10:55:58 web kernel: [92118.674861] DR0: 0000000000000000 DR1:
>>0000000000000000 DR2: 0000000000000000
>> Sep 18 10:55:58 web kernel: [92118.674867] DR3: 0000000000000000 DR6:
>>00000000ffff0ff0 DR7: 0000000000000400
>> Sep 18 10:55:58 web kernel: [92118.674872] Process hhvm (pid: 5020,
>>threadinfo ffff8800026b8000, task ffff8800f7cfc5c0)
>> Sep 18 10:55:58 web kernel: [92118.674879] Stack:
>> Sep 18 10:55:58 web kernel: [92118.674882]  ffff8800026b9d58
>>ffffffff81024625 0000000000000000 ffff8800f87ecc00
>> Sep 18 10:55:58 web kernel: [92118.674893]  ffff8800f7c8190c
>>ffffffff811231a0 0000000000000005 ffff8800026b9d68
>> Sep 18 10:55:58 web kernel: [92118.674902]  ffffffff81024689
>>ffff8800026b9d90 ffffffff811232a6 00000000ffff02ff
>> Sep 18 10:55:58 web kernel: [92118.674911] Call Trace:
>> Sep 18 10:55:58 web kernel: [92118.674920]  [<ffffffff81024625>]
>>x86_perf_event_update+0x55/0xb0
>> Sep 18 10:55:58 web kernel: [92118.674929]  [<ffffffff811231a0>] ?
>>perf_read+0x2f0/0x2f0
>> Sep 18 10:55:58 web kernel: [92118.674936]  [<ffffffff81024689>]
>>x86_pmu_read+0x9/0x10
>> Sep 18 10:55:58 web kernel: [92118.674942]  [<ffffffff811232a6>]
>>__perf_event_read+0x106/0x110
>> Sep 18 10:55:58 web kernel: [92118.674951]  [<ffffffff810b9987>]
>>smp_call_function_single+0x147/0x170
>> Sep 18 10:55:58 web kernel: [92118.674959]  [<ffffffff811240d0>] ?
>>perf_mmap+0x2f0/0x2f0
>> Sep 18 10:55:58 web kernel: [92118.674966]  [<ffffffff81122dda>]
>>perf_event_read+0x10a/0x110
>> Sep 18 10:55:58 web kernel: [92118.674972]  [<ffffffff811240d0>] ?
>>perf_mmap+0x2f0/0x2f0
>> Sep 18 10:55:58 web kernel: [92118.674979]  [<ffffffff811240dd>]
>>perf_event_reset+0xd/0x20
>> Sep 18 10:55:58 web kernel: [92118.674987]  [<ffffffff8111ff08>]
>>perf_event_for_each_child+0x38/0xa0
>> Sep 18 10:55:58 web kernel: [92118.674994]  [<ffffffff811240d0>] ?
>>perf_mmap+0x2f0/0x2f0
>> Sep 18 10:55:58 web kernel: [92118.675001]  [<ffffffff8112255a>]
>>perf_ioctl+0xba/0x340
>> Sep 18 10:55:58 web kernel: [92118.675009]  [<ffffffff811b1885>] ?
>>fd_install+0x25/0x30
>> Sep 18 10:55:58 web kernel: [92118.675016]  [<ffffffff811a60e9>]
>>do_vfs_ioctl+0x99/0x570
>> Sep 18 10:55:58 web kernel: [92118.675023]  [<ffffffff811a6651>]
>>sys_ioctl+0x91/0xb0
>> Sep 18 10:55:58 web kernel: [92118.675031]  [<ffffffff816d575d>]
>>system_call_fastpath+0x1a/0x1f
>> Sep 18 10:55:58 web kernel: [92118.675036] Code: 00 00 00 00 00 55 89
>>f9 48 89 e5 0f 32 31 ff 89 c0 48 c1 e2 20 89 3e 48 09 c2 48 89 d0 5d c3
>>66 0f 1f 44 00 00 55 89 f9 48 89 e5 <0f> 33 89 c0 48 c1 e2 20 48 09 c2
>>48 89 d0 5d c3 66 2e 0f 1f 84
>> Sep 18 10:55:58 web kernel: [92118.675103] RIP  [<ffffffff81003046>]
>>native_read_pmc+0x6/0x20
>> Sep 18 10:55:58 web kernel: [92118.675110]  RSP <ffff8800026b9d20>
>> Sep 18 10:55:58 web kernel: [92118.675118] ---[ end trace
>>1a73231ba5f74716 ]---
>> 
>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>
>


* Re: [BUG] hhvm running on Ubuntu 13.04 with Xen Hypervisor - linux kernel panic
  2013-09-19 10:14   ` Craig Carnell
@ 2013-09-19 10:28     ` Wei Liu
  0 siblings, 0 replies; 15+ messages in thread
From: Wei Liu @ 2013-09-19 10:28 UTC (permalink / raw)
  To: Craig Carnell; +Cc: Wei Liu, xen-devel@lists.xen.org

On Thu, Sep 19, 2013 at 10:14:45AM +0000, Craig Carnell wrote:
> Thanks! But when I run
> 
> ./configure -DNO_HARDWARE_COUNTERS=1
> 
> 
> I get:
> 
> Manually-specified variables were not used by the project:
> 
>     NO_HARDWARE_COUNTERS
> 

OK, then:
1) apparently "./configure --help" is wrong
2) you need to figure out how to pass that macro to the compiler. :-)

The reason I told you to define that macro lies in hardware-counter.h.
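
(Illustrative only, not HHVM's actual header: the point of such a macro is
that, when it is defined at compile time, the counter interface builds as
no-op stubs instead of opening perf events and reading the PMU. The names
below are hypothetical.)

  #ifdef NO_HARDWARE_COUNTERS
  static inline void      hw_counter_init(void) { /* stub: do nothing   */ }
  static inline long long hw_counter_read(void) { return 0; /* no rdpmc */ }
  #else
  void      hw_counter_init(void);   /* real version: perf_event_open() */
  long long hw_counter_read(void);   /* real version: reads the counter */
  #endif

Since the "Manually-specified variables were not used by the project"
warning quoted above is CMake's, the define apparently has to reach the
compiler as a preprocessor flag (for example through the C++ compiler
flags) rather than as a CMake cache variable.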

Wei.


* Re: [BUG] hhvm running on Ubuntu 13.04 with Xen Hypervisor - linux kernel panic
  2013-09-19  9:52 ` Wei Liu
  2013-09-19 10:14   ` Craig Carnell
@ 2013-09-19 11:51   ` Dietmar Hahn
  2013-09-19 15:02     ` Craig Carnell
  1 sibling, 1 reply; 15+ messages in thread
From: Dietmar Hahn @ 2013-09-19 11:51 UTC (permalink / raw)
  To: xen-devel; +Cc: Craig Carnell, Wei Liu


On Thursday, 19 September 2013, 10:52:26, Wei Liu wrote:
> On Wed, Sep 18, 2013 at 11:21:18AM +0000, Craig Carnell wrote:
> > Hi,
> > 
> > I am trying out hiphop vm (the php just in time compiler). My setup is a Rackspace Cloud Server running Ubuntu 13.04 with kernel 3.8.0-30-generic #44-Ubuntu SMP Thu Aug 22 20:52:24 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
> > 
> > The cloud server uses Xen Hypervisor.
> > 
> > Hiphopvm is compiled from source using the github repo. When running hhvm from the command line (without any options or php application) the system immediately crashes, throwing linux into a kernel panic and thus death.
> > 
> > I have reported this issue on hiphop github issue page:
> > 
> > https://github.com/facebook/hiphop-php/issues/1065
> > 
> > I am not sure if this is a linux kernel bug or a xen hypervisor bug:
> > 
> 
> I'm not a expert on VPMU stuffs, but it seems that HHVM makes use of
> (virtual) hardware performance counter which is not well supported at
> the moment, which causes this problem.
> 
> Try to compile HHVM without hardware performance counter support might
> solve this problem.
> 
>   ./configure -DNO_HARDWARE_COUNTERS=1
> 
> Wei.
> 
> > The output of /var/log/syslog:
> > 
> > Sep 18 10:55:58 web kernel: [92118.674736] general protection fault: 0000 [#1] SMP
> > Sep 18 10:55:58 web kernel: [92118.674754] Modules linked in: xenfs(F) xen_privcmd(F) xt_tcpudp(F) nf_conntrack_ipv4(F) nf_defrag_ipv4(F) xt_state(F) nf_conntrack(F) xt_comment(F) iptable_filter(F) ip_tables(F) x_tables(F) microcode(F) lp(F) parport(F)
> > Sep 18 10:55:58 web kernel: [92118.674781] CPU 0
> > Sep 18 10:55:58 web kernel: [92118.674787] Pid: 5020, comm: hhvm Tainted: GF            3.8.0-30-generic #44-Ubuntu
> > Sep 18 10:55:58 web kernel: [92118.674795] RIP: e030:[<ffffffff81003046>]  [<ffffffff81003046>] native_read_pmc+0x6/0x20
> > Sep 18 10:55:58 web kernel: [92118.674809] RSP: e02b:ffff8800026b9d20  EFLAGS: 00010083
> > Sep 18 10:55:58 web kernel: [92118.674814] RAX: ffffffff81c1bd80 RBX: 0000000000000000 RCX: 0000000000000000
> > Sep 18 10:55:58 web kernel: [92118.674819] RDX: 0000000000005f6c RSI: ffff8800f7c81900 RDI: 0000000000000000
> > Sep 18 10:55:58 web kernel: [92118.674824] RBP: ffff8800026b9d20 R08: 00000000000337d8 R09: ffff8800e933dcc0
> > Sep 18 10:55:58 web kernel: [92118.674830] R10: 00007fff2d3caea0 R11: 0000000000000246 R12: ffff8800f87ecc00
> > Sep 18 10:55:58 web kernel: [92118.674835] R13: ffff800000000001 R14: ffff8800f87ecd70 R15: 0000000000000010
> > Sep 18 10:55:58 web kernel: [92118.674844] FS:  00007f43d4c9b180(0000) GS:ffff8800ffc00000(0000) knlGS:0000000000000000
> > Sep 18 10:55:58 web kernel: [92118.674850] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
> > Sep 18 10:55:58 web kernel: [92118.674855] CR2: 000000000105ebc0 CR3: 00000000025cd000 CR4: 0000000000000660
> > Sep 18 10:55:58 web kernel: [92118.674861] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> > Sep 18 10:55:58 web kernel: [92118.674867] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> > Sep 18 10:55:58 web kernel: [92118.674872] Process hhvm (pid: 5020, threadinfo ffff8800026b8000, task ffff8800f7cfc5c0)
> > Sep 18 10:55:58 web kernel: [92118.674879] Stack:
> > Sep 18 10:55:58 web kernel: [92118.674882]  ffff8800026b9d58 ffffffff81024625 0000000000000000 ffff8800f87ecc00
> > Sep 18 10:55:58 web kernel: [92118.674893]  ffff8800f7c8190c ffffffff811231a0 0000000000000005 ffff8800026b9d68
> > Sep 18 10:55:58 web kernel: [92118.674902]  ffffffff81024689 ffff8800026b9d90 ffffffff811232a6 00000000ffff02ff
> > Sep 18 10:55:58 web kernel: [92118.674911] Call Trace:
> > Sep 18 10:55:58 web kernel: [92118.674920]  [<ffffffff81024625>] x86_perf_event_update+0x55/0xb0
> > Sep 18 10:55:58 web kernel: [92118.674929]  [<ffffffff811231a0>] ? perf_read+0x2f0/0x2f0
> > Sep 18 10:55:58 web kernel: [92118.674936]  [<ffffffff81024689>] x86_pmu_read+0x9/0x10
> > Sep 18 10:55:58 web kernel: [92118.674942]  [<ffffffff811232a6>] __perf_event_read+0x106/0x110
> > Sep 18 10:55:58 web kernel: [92118.674951]  [<ffffffff810b9987>] smp_call_function_single+0x147/0x170
> > Sep 18 10:55:58 web kernel: [92118.674959]  [<ffffffff811240d0>] ? perf_mmap+0x2f0/0x2f0
> > Sep 18 10:55:58 web kernel: [92118.674966]  [<ffffffff81122dda>] perf_event_read+0x10a/0x110
> > Sep 18 10:55:58 web kernel: [92118.674972]  [<ffffffff811240d0>] ? perf_mmap+0x2f0/0x2f0
> > Sep 18 10:55:58 web kernel: [92118.674979]  [<ffffffff811240dd>] perf_event_reset+0xd/0x20
> > Sep 18 10:55:58 web kernel: [92118.674987]  [<ffffffff8111ff08>] perf_event_for_each_child+0x38/0xa0
> > Sep 18 10:55:58 web kernel: [92118.674994]  [<ffffffff811240d0>] ? perf_mmap+0x2f0/0x2f0
> > Sep 18 10:55:58 web kernel: [92118.675001]  [<ffffffff8112255a>] perf_ioctl+0xba/0x340
> > Sep 18 10:55:58 web kernel: [92118.675009]  [<ffffffff811b1885>] ? fd_install+0x25/0x30
> > Sep 18 10:55:58 web kernel: [92118.675016]  [<ffffffff811a60e9>] do_vfs_ioctl+0x99/0x570
> > Sep 18 10:55:58 web kernel: [92118.675023]  [<ffffffff811a6651>] sys_ioctl+0x91/0xb0
> > Sep 18 10:55:58 web kernel: [92118.675031]  [<ffffffff816d575d>] system_call_fastpath+0x1a/0x1f
> > Sep 18 10:55:58 web kernel: [92118.675036] Code: 00 00 00 00 00 55 89 f9 48 89 e5 0f 32 31 ff 89 c0 48 c1 e2 20 89 3e 48 09 c2 48 89 d0 5d c3 66 0f 1f 44 00 00 55 89 f9 48 89 e5 <0f> 33 89 c0 48 c1 e2 20 48 09 c2 48 89 d0 5d c3 66 2e 0f 1f 84

It panics on <0f> 33 (the marked bytes in the Code: line above).
The code of native_read_pmc():
ffffffff81030bc0 <native_read_pmc>:
ffffffff81030bc0:	89 f9                	mov    %edi,%ecx
ffffffff81030bc2:	0f 33                	rdpmc  
ffffffff81030bc4:	48 c1 e2 20          	shl    $0x20,%rdx
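
(In C, a simplified sketch of what the kernel's native_read_pmc() boils down
to; the real source builds this from macros, but the effect is the same:)

  static inline unsigned long long native_read_pmc(int counter)
  {
      unsigned int low, high;

      /* counter index goes in %ecx; rdpmc returns the value in %edx:%eax */
      asm volatile("rdpmc" : "=a" (low), "=d" (high) : "c" (counter));
      return low | ((unsigned long long)high << 32);
  }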

So it is the rdpmc instruction that leads to the panic.
With the Xen VPMU (on HVM) the rdpmc instructions are not intercepted, I think.
On PV I'm not sure. Maybe try xm dmesg?
Which Xen version?

Dietmar.


> > Sep 18 10:55:58 web kernel: [92118.675103] RIP  [<ffffffff81003046>] native_read_pmc+0x6/0x20
> > Sep 18 10:55:58 web kernel: [92118.675110]  RSP <ffff8800026b9d20>
> > Sep 18 10:55:58 web kernel: [92118.675118] ---[ end trace 1a73231ba5f74716 ]---
> > 
> 
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 
-- 
Company details: http://ts.fujitsu.com/imprint.html


* Re: [BUG] hhvm running on Ubuntu 13.04 with Xen Hypervisor - linux kernel panic
  2013-09-19 11:51   ` Dietmar Hahn
@ 2013-09-19 15:02     ` Craig Carnell
  2013-09-20 12:02       ` Dietmar Hahn
  0 siblings, 1 reply; 15+ messages in thread
From: Craig Carnell @ 2013-09-19 15:02 UTC (permalink / raw)
  To: Dietmar Hahn, xen-devel@lists.xen.org; +Cc: Wei Liu


Xen version is 4.1.3.

I'm not able to run xm; it asks for xen-utils 4.1, which I try to install (xen-utils 4.2 gets installed instead), but it still can't find it.

Sorry!




* Re: [BUG] hhvm running on Ubuntu 13.04 with Xen Hypervisor - linux kernel panic
  2013-09-19 15:02     ` Craig Carnell
@ 2013-09-20 12:02       ` Dietmar Hahn
  2013-09-20 12:07         ` Craig Carnell
  0 siblings, 1 reply; 15+ messages in thread
From: Dietmar Hahn @ 2013-09-20 12:02 UTC (permalink / raw)
  To: xen-devel; +Cc: Craig Carnell, Wei Liu

On Thursday, 19 September 2013, 15:02:38, Craig Carnell wrote:
> Xen Version is 4.1.3
> 
> I'm not able to run xm it asks for xen-utils 4.1 which I install (xen-utils 4.2 installs) but it can't find it..
> 
> Sorry!

It seems your hhvm is running in a PV domU at cpl=3, and in this case the
rdpmc leads to the general protection fault because there is no VPMU support
for PV domains.
What you can do is run your hhvm in an HVM domain; then you should not
get a panic.
The other way is to build your hhvm without hardware performance counters,
as Wei Liu already mentioned. I think this is also the way to go for a Linux
dom0.

Dietmar.

> 
> 
> From: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com<mailto:dietmar.hahn@ts.fujitsu.com>>
> Date: Thursday, 19 September 2013 12:51
> To: "xen-devel@lists.xen.org<mailto:xen-devel@lists.xen.org>" <xen-devel@lists.xen.org<mailto:xen-devel@lists.xen.org>>
> Cc: Wei Liu <wei.liu2@citrix.com<mailto:wei.liu2@citrix.com>>, Craig Carnell <ccarnell@tti-fc.com<mailto:ccarnell@tti-fc.com>>
> Subject: Re: [Xen-devel] [BUG] hhvm running on Ubuntu 13.04 with Xen Hypervisor - linux kernel panic
> 
> 
> Am Donnerstag 19 September 2013, 10:52:26 schrieb Wei Liu:
> 
> > On Wed, Sep 18, 2013 at 11:21:18AM +0000, Craig Carnell wrote:
> 
> > > Hi,
> 
> > >
> 
> > > I am trying out hiphop vm (the php just in time compiler). My setup is a Rackspace Cloud Server running Ubuntu 13.04 with kernel 3.8.0-30-generic #44-Ubuntu SMP Thu Aug 22 20:52:24 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
> 
> > >
> 
> > > The cloud server uses Xen Hypervisor.
> 
> > >
> 
> > > Hiphopvm is compiled from source using the github repo. When running hhvm from the command line (without any options or php application) the system immediately crashes, throwing linux into a kernel panic and thus death.
> 
> > >
> 
> > > I have reported this issue on hiphop github issue page:
> 
> > >
> 
> > > https://github.com/facebook/hiphop-php/issues/1065
> 
> > >
> 
> > > I am not sure if this is a linux kernel bug or a xen hypervisor bug:
> 
> > >
> 
> >
> 
> > I'm not a expert on VPMU stuffs, but it seems that HHVM makes use of
> 
> > (virtual) hardware performance counter which is not well supported at
> 
> > the moment, which causes this problem.
> 
> >
> 
> > Try to compile HHVM without hardware performance counter support might
> 
> > solve this problem.
> 
> >
> 
> > ./configure -DNO_HARDWARE_COUNTERS=1
> 
> >
> 
> > Wei.
> 
> >
> 
> > > The output of /var/log/syslog:
> 
> > >
> 
> > > Sep 18 10:55:58 web kernel: [92118.674736] general protection fault: 0000 [#1] SMP
> 
> > > Sep 18 10:55:58 web kernel: [92118.674754] Modules linked in: xenfs(F) xen_privcmd(F) xt_tcpudp(F) nf_conntrack_ipv4(F) nf_defrag_ipv4(F) xt_state(F) nf_conntrack(F) xt_comment(F) iptable_filter(F) ip_tables(F) x_tables(F) microcode(F) lp(F) parport(F)
> 
> > > Sep 18 10:55:58 web kernel: [92118.674781] CPU 0
> 
> > > Sep 18 10:55:58 web kernel: [92118.674787] Pid: 5020, comm: hhvm Tainted: GF 3.8.0-30-generic #44-Ubuntu
> 
> > > Sep 18 10:55:58 web kernel: [92118.674795] RIP: e030:[<ffffffff81003046>] [<ffffffff81003046>] native_read_pmc+0x6/0x20
> 
> > > Sep 18 10:55:58 web kernel: [92118.674809] RSP: e02b:ffff8800026b9d20 EFLAGS: 00010083
> 
> > > Sep 18 10:55:58 web kernel: [92118.674814] RAX: ffffffff81c1bd80 RBX: 0000000000000000 RCX: 0000000000000000
> 
> > > Sep 18 10:55:58 web kernel: [92118.674819] RDX: 0000000000005f6c RSI: ffff8800f7c81900 RDI: 0000000000000000
> 
> > > Sep 18 10:55:58 web kernel: [92118.674824] RBP: ffff8800026b9d20 R08: 00000000000337d8 R09: ffff8800e933dcc0
> 
> > > Sep 18 10:55:58 web kernel: [92118.674830] R10: 00007fff2d3caea0 R11: 0000000000000246 R12: ffff8800f87ecc00
> 
> > > Sep 18 10:55:58 web kernel: [92118.674835] R13: ffff800000000001 R14: ffff8800f87ecd70 R15: 0000000000000010
> 
> > > Sep 18 10:55:58 web kernel: [92118.674844] FS: 00007f43d4c9b180(0000) GS:ffff8800ffc00000(0000) knlGS:0000000000000000
> 
> > > Sep 18 10:55:58 web kernel: [92118.674850] CS: e033 DS: 0000 ES: 0000 CR0: 000000008005003b
> 
> > > Sep 18 10:55:58 web kernel: [92118.674855] CR2: 000000000105ebc0 CR3: 00000000025cd000 CR4: 0000000000000660
> 
> > > Sep 18 10:55:58 web kernel: [92118.674861] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> 
> > > Sep 18 10:55:58 web kernel: [92118.674867] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> 
> > > Sep 18 10:55:58 web kernel: [92118.674872] Process hhvm (pid: 5020, threadinfo ffff8800026b8000, task ffff8800f7cfc5c0)
> 
> > > Sep 18 10:55:58 web kernel: [92118.674879] Stack:
> 
> > > Sep 18 10:55:58 web kernel: [92118.674882] ffff8800026b9d58 ffffffff81024625 0000000000000000 ffff8800f87ecc00
> 
> > > Sep 18 10:55:58 web kernel: [92118.674893] ffff8800f7c8190c ffffffff811231a0 0000000000000005 ffff8800026b9d68
> 
> > > Sep 18 10:55:58 web kernel: [92118.674902] ffffffff81024689 ffff8800026b9d90 ffffffff811232a6 00000000ffff02ff
> 
> > > Sep 18 10:55:58 web kernel: [92118.674911] Call Trace:
> 
> > > Sep 18 10:55:58 web kernel: [92118.674920] [<ffffffff81024625>] x86_perf_event_update+0x55/0xb0
> 
> > > Sep 18 10:55:58 web kernel: [92118.674929] [<ffffffff811231a0>] ? perf_read+0x2f0/0x2f0
> 
> > > Sep 18 10:55:58 web kernel: [92118.674936] [<ffffffff81024689>] x86_pmu_read+0x9/0x10
> 
> > > Sep 18 10:55:58 web kernel: [92118.674942] [<ffffffff811232a6>] __perf_event_read+0x106/0x110
> 
> > > Sep 18 10:55:58 web kernel: [92118.674951] [<ffffffff810b9987>] smp_call_function_single+0x147/0x170
> 
> > > Sep 18 10:55:58 web kernel: [92118.674959] [<ffffffff811240d0>] ? perf_mmap+0x2f0/0x2f0
> 
> > > Sep 18 10:55:58 web kernel: [92118.674966] [<ffffffff81122dda>] perf_event_read+0x10a/0x110
> 
> > > Sep 18 10:55:58 web kernel: [92118.674972] [<ffffffff811240d0>] ? perf_mmap+0x2f0/0x2f0
> 
> > > Sep 18 10:55:58 web kernel: [92118.674979] [<ffffffff811240dd>] perf_event_reset+0xd/0x20
> 
> > > Sep 18 10:55:58 web kernel: [92118.674987] [<ffffffff8111ff08>] perf_event_for_each_child+0x38/0xa0
> 
> > > Sep 18 10:55:58 web kernel: [92118.674994] [<ffffffff811240d0>] ? perf_mmap+0x2f0/0x2f0
> 
> > > Sep 18 10:55:58 web kernel: [92118.675001] [<ffffffff8112255a>] perf_ioctl+0xba/0x340
> 
> > > Sep 18 10:55:58 web kernel: [92118.675009] [<ffffffff811b1885>] ? fd_install+0x25/0x30
> 
> > > Sep 18 10:55:58 web kernel: [92118.675016] [<ffffffff811a60e9>] do_vfs_ioctl+0x99/0x570
> 
> > > Sep 18 10:55:58 web kernel: [92118.675023] [<ffffffff811a6651>] sys_ioctl+0x91/0xb0
> 
> > > Sep 18 10:55:58 web kernel: [92118.675031] [<ffffffff816d575d>] system_call_fastpath+0x1a/0x1f
> 
> > > Sep 18 10:55:58 web kernel: [92118.675036] Code: 00 00 00 00 00 55 89 f9 48 89 e5 0f 32 31 ff 89 c0 48 c1 e2 20 89 3e 48 09 c2 48 89 d0 5d c3 66 0f 1f 44 00 00 55 89 f9 48 89 e5 <0f> 33 89 c0 48 c1 e2 20 48 09 c2 48 89 d0 5d c3 66 2e 0f 1f 84
> 
> 
> 
> It panics on <0f> 33
> The code of native_read_pmc():
> ffffffff81030bc0 <native_read_pmc>:
> ffffffff81030bc0: 89 f9 mov %edi,%ecx
> ffffffff81030bc2: 0f 33 rdpmc
> ffffffff81030bc4: 48 c1 e2 20 shl $0x20,%rdx
> 
> So it's the rdpmc which leads to the panic.
> In the xen VPMU (on HVM)the rdpmc are not intercepted I think.
> On PV I'am not sure. Maybe xm dmesg ?
> Which xen version?
> 
> 
> 
> Dietmar.

-- 
Company details: http://ts.fujitsu.com/imprint.html


* Re: [BUG] hhvm running on Ubuntu 13.04 with Xen Hypervisor - linux kernel panic
  2013-09-20 12:02       ` Dietmar Hahn
@ 2013-09-20 12:07         ` Craig Carnell
  2013-09-20 12:33           ` Dietmar Hahn
  0 siblings, 1 reply; 15+ messages in thread
From: Craig Carnell @ 2013-09-20 12:07 UTC (permalink / raw)
  To: Dietmar Hahn, xen-devel@lists.xen.org; +Cc: Wei Liu

Unfortunately the servers are provided "as is" by Rackspace, unless this is
something I can change from the terminal (unlikely).

Is there any performance loss from not using hardware performance
counters? Also, do you know the correct configure command, as it did not
recognise the one I tried?

Thanks

On 20/09/2013 13:02, "Dietmar Hahn" <dietmar.hahn@ts.fujitsu.com> wrote:

>Am Donnerstag 19 September 2013, 15:02:38 schrieb Craig Carnell:
>> Xen Version is 4.1.3
>> 
>> I'm not able to run xm it asks for xen-utils 4.1 which I install
>>(xen-utils 4.2 installs) but it can't find it..
>> 
>> Sorry!
>
>As it seems your hhvm is running as PV domu with cpl=3 and in this case
>the
>rdpmc leads to the general protection fault because there is no VPMU
>support
>for PV domains.
>What you can do is let your hhvm run as a HVM domain. Then you should not
>get a panic.
>The other way is to build your hhvm without hardware performance counters
>like Wei Liu already mentioned. This is the way for linux dom0 I think.
>
>Dietmar.
>
>> 
>> 
>> From: Dietmar Hahn
>><dietmar.hahn@ts.fujitsu.com<mailto:dietmar.hahn@ts.fujitsu.com>>
>> Date: Thursday, 19 September 2013 12:51
>> To: "xen-devel@lists.xen.org<mailto:xen-devel@lists.xen.org>"
>><xen-devel@lists.xen.org<mailto:xen-devel@lists.xen.org>>
>> Cc: Wei Liu <wei.liu2@citrix.com<mailto:wei.liu2@citrix.com>>, Craig
>>Carnell <ccarnell@tti-fc.com<mailto:ccarnell@tti-fc.com>>
>> Subject: Re: [Xen-devel] [BUG] hhvm running on Ubuntu 13.04 with Xen
>>Hypervisor - linux kernel panic
>> 
>> 
>> Am Donnerstag 19 September 2013, 10:52:26 schrieb Wei Liu:
>> 
>> > On Wed, Sep 18, 2013 at 11:21:18AM +0000, Craig Carnell wrote:
>> 
>> > > Hi,
>> 
>> > >
>> 
>> > > I am trying out hiphop vm (the php just in time compiler). My setup
>>is a Rackspace Cloud Server running Ubuntu 13.04 with kernel
>>3.8.0-30-generic #44-Ubuntu SMP Thu Aug 22 20:52:24 UTC 2013 x86_64
>>x86_64 x86_64 GNU/Linux
>> 
>> > >
>> 
>> > > The cloud server uses Xen Hypervisor.
>> 
>> > >
>> 
>> > > Hiphopvm is compiled from source using the github repo. When
>>running hhvm from the command line (without any options or php
>>application) the system immediately crashes, throwing linux into a
>>kernel panic and thus death.
>> 
>> > >
>> 
>> > > I have reported this issue on hiphop github issue page:
>> 
>> > >
>> 
>> > > https://github.com/facebook/hiphop-php/issues/1065
>> 
>> > >
>> 
>> > > I am not sure if this is a linux kernel bug or a xen hypervisor bug:
>> 
>> > >
>> 
>> >
>> 
>> > I'm not a expert on VPMU stuffs, but it seems that HHVM makes use of
>> 
>> > (virtual) hardware performance counter which is not well supported at
>> 
>> > the moment, which causes this problem.
>> 
>> >
>> 
>> > Try to compile HHVM without hardware performance counter support might
>> 
>> > solve this problem.
>> 
>> >
>> 
>> > ./configure -DNO_HARDWARE_COUNTERS=1
>> 
>> >
>> 
>> > Wei.
>> 
>> >
>> 
>> > > The output of /var/log/syslog:
>> 
>> > >
>> 
>> > > Sep 18 10:55:58 web kernel: [92118.674736] general protection
>>fault: 0000 [#1] SMP
>> 
>> > > Sep 18 10:55:58 web kernel: [92118.674754] Modules linked in:
>>xenfs(F) xen_privcmd(F) xt_tcpudp(F) nf_conntrack_ipv4(F)
>>nf_defrag_ipv4(F) xt_state(F) nf_conntrack(F) xt_comment(F)
>>iptable_filter(F) ip_tables(F) x_tables(F) microcode(F) lp(F) parport(F)
>> 
>> > > Sep 18 10:55:58 web kernel: [92118.674781] CPU 0
>> 
>> > > Sep 18 10:55:58 web kernel: [92118.674787] Pid: 5020, comm: hhvm
>>Tainted: GF 3.8.0-30-generic #44-Ubuntu
>> 
>> > > Sep 18 10:55:58 web kernel: [92118.674795] RIP:
>>e030:[<ffffffff81003046>] [<ffffffff81003046>] native_read_pmc+0x6/0x20
>> 
>> > > Sep 18 10:55:58 web kernel: [92118.674809] RSP:
>>e02b:ffff8800026b9d20 EFLAGS: 00010083
>> 
>> > > Sep 18 10:55:58 web kernel: [92118.674814] RAX: ffffffff81c1bd80
>>RBX: 0000000000000000 RCX: 0000000000000000
>> 
>> > > Sep 18 10:55:58 web kernel: [92118.674819] RDX: 0000000000005f6c
>>RSI: ffff8800f7c81900 RDI: 0000000000000000
>> 
>> > > Sep 18 10:55:58 web kernel: [92118.674824] RBP: ffff8800026b9d20
>>R08: 00000000000337d8 R09: ffff8800e933dcc0
>> 
>> > > Sep 18 10:55:58 web kernel: [92118.674830] R10: 00007fff2d3caea0
>>R11: 0000000000000246 R12: ffff8800f87ecc00
>> 
>> > > Sep 18 10:55:58 web kernel: [92118.674835] R13: ffff800000000001
>>R14: ffff8800f87ecd70 R15: 0000000000000010
>> 
>> > > Sep 18 10:55:58 web kernel: [92118.674844] FS:
>>00007f43d4c9b180(0000) GS:ffff8800ffc00000(0000) knlGS:0000000000000000
>> 
>> > > Sep 18 10:55:58 web kernel: [92118.674850] CS: e033 DS: 0000 ES:
>>0000 CR0: 000000008005003b
>> 
>> > > Sep 18 10:55:58 web kernel: [92118.674855] CR2: 000000000105ebc0
>>CR3: 00000000025cd000 CR4: 0000000000000660
>> 
>> > > Sep 18 10:55:58 web kernel: [92118.674861] DR0: 0000000000000000
>>DR1: 0000000000000000 DR2: 0000000000000000
>> 
>> > > Sep 18 10:55:58 web kernel: [92118.674867] DR3: 0000000000000000
>>DR6: 00000000ffff0ff0 DR7: 0000000000000400
>> 
>> > > Sep 18 10:55:58 web kernel: [92118.674872] Process hhvm (pid: 5020,
>>threadinfo ffff8800026b8000, task ffff8800f7cfc5c0)
>> 
>> > > Sep 18 10:55:58 web kernel: [92118.674879] Stack:
>> 
>> > > Sep 18 10:55:58 web kernel: [92118.674882] ffff8800026b9d58
>>ffffffff81024625 0000000000000000 ffff8800f87ecc00
>> 
>> > > Sep 18 10:55:58 web kernel: [92118.674893] ffff8800f7c8190c
>>ffffffff811231a0 0000000000000005 ffff8800026b9d68
>> 
>> > > Sep 18 10:55:58 web kernel: [92118.674902] ffffffff81024689
>>ffff8800026b9d90 ffffffff811232a6 00000000ffff02ff
>> 
>> > > Sep 18 10:55:58 web kernel: [92118.674911] Call Trace:
>> 
>> > > Sep 18 10:55:58 web kernel: [92118.674920] [<ffffffff81024625>]
>>x86_perf_event_update+0x55/0xb0
>> 
>> > > Sep 18 10:55:58 web kernel: [92118.674929] [<ffffffff811231a0>] ?
>>perf_read+0x2f0/0x2f0
>> 
>> > > Sep 18 10:55:58 web kernel: [92118.674936] [<ffffffff81024689>]
>>x86_pmu_read+0x9/0x10
>> 
>> > > Sep 18 10:55:58 web kernel: [92118.674942] [<ffffffff811232a6>]
>>__perf_event_read+0x106/0x110
>> 
>> > > Sep 18 10:55:58 web kernel: [92118.674951] [<ffffffff810b9987>]
>>smp_call_function_single+0x147/0x170
>> 
>> > > Sep 18 10:55:58 web kernel: [92118.674959] [<ffffffff811240d0>] ?
>>perf_mmap+0x2f0/0x2f0
>> 
>> > > Sep 18 10:55:58 web kernel: [92118.674966] [<ffffffff81122dda>]
>>perf_event_read+0x10a/0x110
>> 
>> > > Sep 18 10:55:58 web kernel: [92118.674972] [<ffffffff811240d0>] ?
>>perf_mmap+0x2f0/0x2f0
>> 
>> > > Sep 18 10:55:58 web kernel: [92118.674979] [<ffffffff811240dd>]
>>perf_event_reset+0xd/0x20
>> 
>> > > Sep 18 10:55:58 web kernel: [92118.674987] [<ffffffff8111ff08>]
>>perf_event_for_each_child+0x38/0xa0
>> 
>> > > Sep 18 10:55:58 web kernel: [92118.674994] [<ffffffff811240d0>] ?
>>perf_mmap+0x2f0/0x2f0
>> 
>> > > Sep 18 10:55:58 web kernel: [92118.675001] [<ffffffff8112255a>]
>>perf_ioctl+0xba/0x340
>> 
>> > > Sep 18 10:55:58 web kernel: [92118.675009] [<ffffffff811b1885>] ?
>>fd_install+0x25/0x30
>> 
>> > > Sep 18 10:55:58 web kernel: [92118.675016] [<ffffffff811a60e9>]
>>do_vfs_ioctl+0x99/0x570
>> 
>> > > Sep 18 10:55:58 web kernel: [92118.675023] [<ffffffff811a6651>]
>>sys_ioctl+0x91/0xb0
>> 
>> > > Sep 18 10:55:58 web kernel: [92118.675031] [<ffffffff816d575d>]
>>system_call_fastpath+0x1a/0x1f
>> 
>> > > Sep 18 10:55:58 web kernel: [92118.675036] Code: 00 00 00 00 00 55
>>89 f9 48 89 e5 0f 32 31 ff 89 c0 48 c1 e2 20 89 3e 48 09 c2 48 89 d0 5d
>>c3 66 0f 1f 44 00 00 55 89 f9 48 89 e5 <0f> 33 89 c0 48 c1 e2 20 48 09
>>c2 48 89 d0 5d c3 66 2e 0f 1f 84
>> 
>> 
>> 
>> It panics on <0f> 33
>> The code of native_read_pmc():
>> ffffffff81030bc0 <native_read_pmc>:
>> ffffffff81030bc0: 89 f9 mov %edi,%ecx
>> ffffffff81030bc2: 0f 33 rdpmc
>> ffffffff81030bc4: 48 c1 e2 20 shl $0x20,%rdx
>> 
>> So it's the rdpmc which leads to the panic.
>> In the Xen VPMU (on HVM) the rdpmc is not intercepted, I think.
>> On PV I'm not sure. Maybe xm dmesg?
>> Which xen version?
>> 
>> 
>> 
>> Dietmar.
>
>-- 
>Company details: http://ts.fujitsu.com/imprint.html
>

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [BUG] hhvm running on Ubuntu 13.04 with Xen Hypervisor - linux kernel panic
  2013-09-20 12:07         ` Craig Carnell
@ 2013-09-20 12:33           ` Dietmar Hahn
  0 siblings, 0 replies; 15+ messages in thread
From: Dietmar Hahn @ 2013-09-20 12:33 UTC (permalink / raw)
  To: xen-devel; +Cc: Craig Carnell, Wei Liu

On Friday, 20 September 2013 at 12:07:00, Craig Carnell wrote:
> Unfortunately the servers are provided "as is" by Rackspace, unless this is
> something I can change from the terminal (unlikely).
> 
> Is there any performance loss from not using hardware performance
> counters?

I don't know hhvm, so I don't know what the counters are used for in
this case.
Normally Linux uses one counter for watchdog handling. If the counters
are not usable, the watchdog handling falls back to the timer interrupt, I think.
The other use case is the perf command.

Another attempt would be to patch the native_read_pmc() function with
a dummy that always returns 0 and see what happens.
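
For illustration, here is a minimal sketch of such a dummy, assuming the
usual native_read_pmc() signature from arch/x86/include/asm/msr.h
(untested, just to show the idea, not a proper fix):

    /* Hypothetical stub: skip the faulting rdpmc and report the
     * counter as always reading zero. For experimentation only. */
    static inline unsigned long long native_read_pmc(int counter)
    {
            (void)counter;   /* counter index is ignored by the stub */
            return 0;        /* PMC value always reported as 0 */
    }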

> Also, do you know the configure command? It was not recognised when
> I tried.

No, unfortunately not :-(

Dietmar.

> 
> Thanks
> 
> On 20/09/2013 13:02, "Dietmar Hahn" <dietmar.hahn@ts.fujitsu.com> wrote:
> 
> >On Thursday, 19 September 2013 at 15:02:38, Craig Carnell wrote:
> >> Xen Version is 4.1.3
> >> 
> >> I'm not able to run xm; it asks for xen-utils 4.1, which I install
> >>(xen-utils 4.2 installs), but it can't find it.
> >> 
> >> Sorry!
> >
> >It seems your hhvm is running as a PV domU with cpl=3, and in this case
> >the rdpmc leads to the general protection fault because there is no VPMU
> >support for PV domains.
> >What you can do is let your hhvm run as an HVM domain. Then you should
> >not get a panic.
> >The other way is to build your hhvm without hardware performance counters,
> >as Wei Liu already mentioned. This is the way to go for a Linux dom0, I think.

-- 
Company details: http://ts.fujitsu.com/imprint.html

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [BUG] hhvm running on Ubuntu 13.04 with Xen Hypervisor - linux kernel panic
  2013-09-18 11:21 Craig Carnell
  2013-09-18 11:23 ` Craig Carnell
  2013-09-19  9:52 ` Wei Liu
@ 2013-09-20 20:09 ` Konrad Rzeszutek Wilk
  2013-09-30  9:01   ` Craig Carnell
  2 siblings, 1 reply; 15+ messages in thread
From: Konrad Rzeszutek Wilk @ 2013-09-20 20:09 UTC (permalink / raw)
  To: Craig Carnell; +Cc: xen-devel@lists.xen.org

On Wed, Sep 18, 2013 at 11:21:18AM +0000, Craig Carnell wrote:
> Hi,
> 
> I am trying out hiphop vm (the php just in time compiler). My setup is a Rackspace Cloud Server running Ubuntu 13.04 with kernel 3.8.0-30-generic #44-Ubuntu SMP Thu Aug 22 20:52:24 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
> 
> The cloud server uses Xen Hypervisor.
> 
> Hiphopvm is compiled from source using the github repo. When running hhvm from the command line (without any options or php application) the system immediately crashes, throwing linux into a kernel panic and thus death.
> 

And what happens if you run 'perf' by itself?


> I have reported this issue on hiphop github issue page:
> 
> https://github.com/facebook/hiphop-php/issues/1065
> 
> I am not sure if this is a linux kernel bug or a xen hypervisor bug:
> 
> The output of /var/log/syslog:
> 
> Sep 18 10:55:58 web kernel: [92118.674736] general protection fault: 0000 [#1] SMP
> Sep 18 10:55:58 web kernel: [92118.674754] Modules linked in: xenfs(F) xen_privcmd(F) xt_tcpudp(F) nf_conntrack_ipv4(F) nf_defrag_ipv4(F) xt_state(F) nf_conntrack(F) xt_comment(F) iptable_filter(F) ip_tables(F) x_tables(F) microcode(F) lp(F) parport(F)
> Sep 18 10:55:58 web kernel: [92118.674781] CPU 0
> Sep 18 10:55:58 web kernel: [92118.674787] Pid: 5020, comm: hhvm Tainted: GF            3.8.0-30-generic #44-Ubuntu
> Sep 18 10:55:58 web kernel: [92118.674795] RIP: e030:[<ffffffff81003046>]  [<ffffffff81003046>] native_read_pmc+0x6/0x20
> Sep 18 10:55:58 web kernel: [92118.674809] RSP: e02b:ffff8800026b9d20  EFLAGS: 00010083
> Sep 18 10:55:58 web kernel: [92118.674814] RAX: ffffffff81c1bd80 RBX: 0000000000000000 RCX: 0000000000000000
> Sep 18 10:55:58 web kernel: [92118.674819] RDX: 0000000000005f6c RSI: ffff8800f7c81900 RDI: 0000000000000000
> Sep 18 10:55:58 web kernel: [92118.674824] RBP: ffff8800026b9d20 R08: 00000000000337d8 R09: ffff8800e933dcc0
> Sep 18 10:55:58 web kernel: [92118.674830] R10: 00007fff2d3caea0 R11: 0000000000000246 R12: ffff8800f87ecc00
> Sep 18 10:55:58 web kernel: [92118.674835] R13: ffff800000000001 R14: ffff8800f87ecd70 R15: 0000000000000010
> Sep 18 10:55:58 web kernel: [92118.674844] FS:  00007f43d4c9b180(0000) GS:ffff8800ffc00000(0000) knlGS:0000000000000000
> Sep 18 10:55:58 web kernel: [92118.674850] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
> Sep 18 10:55:58 web kernel: [92118.674855] CR2: 000000000105ebc0 CR3: 00000000025cd000 CR4: 0000000000000660
> Sep 18 10:55:58 web kernel: [92118.674861] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> Sep 18 10:55:58 web kernel: [92118.674867] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> Sep 18 10:55:58 web kernel: [92118.674872] Process hhvm (pid: 5020, threadinfo ffff8800026b8000, task ffff8800f7cfc5c0)
> Sep 18 10:55:58 web kernel: [92118.674879] Stack:
> Sep 18 10:55:58 web kernel: [92118.674882]  ffff8800026b9d58 ffffffff81024625 0000000000000000 ffff8800f87ecc00
> Sep 18 10:55:58 web kernel: [92118.674893]  ffff8800f7c8190c ffffffff811231a0 0000000000000005 ffff8800026b9d68
> Sep 18 10:55:58 web kernel: [92118.674902]  ffffffff81024689 ffff8800026b9d90 ffffffff811232a6 00000000ffff02ff
> Sep 18 10:55:58 web kernel: [92118.674911] Call Trace:
> Sep 18 10:55:58 web kernel: [92118.674920]  [<ffffffff81024625>] x86_perf_event_update+0x55/0xb0
> Sep 18 10:55:58 web kernel: [92118.674929]  [<ffffffff811231a0>] ? perf_read+0x2f0/0x2f0
> Sep 18 10:55:58 web kernel: [92118.674936]  [<ffffffff81024689>] x86_pmu_read+0x9/0x10
> Sep 18 10:55:58 web kernel: [92118.674942]  [<ffffffff811232a6>] __perf_event_read+0x106/0x110
> Sep 18 10:55:58 web kernel: [92118.674951]  [<ffffffff810b9987>] smp_call_function_single+0x147/0x170
> Sep 18 10:55:58 web kernel: [92118.674959]  [<ffffffff811240d0>] ? perf_mmap+0x2f0/0x2f0
> Sep 18 10:55:58 web kernel: [92118.674966]  [<ffffffff81122dda>] perf_event_read+0x10a/0x110
> Sep 18 10:55:58 web kernel: [92118.674972]  [<ffffffff811240d0>] ? perf_mmap+0x2f0/0x2f0
> Sep 18 10:55:58 web kernel: [92118.674979]  [<ffffffff811240dd>] perf_event_reset+0xd/0x20
> Sep 18 10:55:58 web kernel: [92118.674987]  [<ffffffff8111ff08>] perf_event_for_each_child+0x38/0xa0
> Sep 18 10:55:58 web kernel: [92118.674994]  [<ffffffff811240d0>] ? perf_mmap+0x2f0/0x2f0
> Sep 18 10:55:58 web kernel: [92118.675001]  [<ffffffff8112255a>] perf_ioctl+0xba/0x340
> Sep 18 10:55:58 web kernel: [92118.675009]  [<ffffffff811b1885>] ? fd_install+0x25/0x30
> Sep 18 10:55:58 web kernel: [92118.675016]  [<ffffffff811a60e9>] do_vfs_ioctl+0x99/0x570
> Sep 18 10:55:58 web kernel: [92118.675023]  [<ffffffff811a6651>] sys_ioctl+0x91/0xb0
> Sep 18 10:55:58 web kernel: [92118.675031]  [<ffffffff816d575d>] system_call_fastpath+0x1a/0x1f
> Sep 18 10:55:58 web kernel: [92118.675036] Code: 00 00 00 00 00 55 89 f9 48 89 e5 0f 32 31 ff 89 c0 48 c1 e2 20 89 3e 48 09 c2 48 89 d0 5d c3 66 0f 1f 44 00 00 55 89 f9 48 89 e5 <0f> 33 89 c0 48 c1 e2 20 48 09 c2 48 89 d0 5d c3 66 2e 0f 1f 84
> Sep 18 10:55:58 web kernel: [92118.675103] RIP  [<ffffffff81003046>] native_read_pmc+0x6/0x20
> Sep 18 10:55:58 web kernel: [92118.675110]  RSP <ffff8800026b9d20>
> Sep 18 10:55:58 web kernel: [92118.675118] ---[ end trace 1a73231ba5f74716 ]---
> 

> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [BUG] hhvm running on Ubuntu 13.04 with Xen Hypervisor - linux kernel panic
@ 2013-09-22  0:08 Boris Ostrovsky
  2013-09-30 11:02 ` Craig Carnell
  0 siblings, 1 reply; 15+ messages in thread
From: Boris Ostrovsky @ 2013-09-22  0:08 UTC (permalink / raw)
  To: konrad.wilk; +Cc: ccarnell, xen-devel


----- konrad.wilk@oracle.com wrote:

> On Wed, Sep 18, 2013 at 11:21:18AM +0000, Craig Carnell wrote:
> > Hi,
> > 
> > I am trying out hiphop vm (the php just in time compiler). My setup
> is a Rackspace Cloud Server running Ubuntu 13.04 with kernel
> 3.8.0-30-generic #44-Ubuntu SMP Thu Aug 22 20:52:24 UTC 2013 x86_64
> x86_64 x86_64 GNU/Linux
> > 
> > The cloud server uses Xen Hypervisor.
> > 
> > Hiphopvm is compiled from source using the github repo. When running
> hhvm from the command line (without any options or php application)
> the system immediately crashes, throwing linux into a kernel panic and
> thus death.
> > 
> 
> And what happens if you run 'perf' by itself?
> 
> 
> > I have reported this issue on hiphop github issue page:
> > 
> > https://github.com/facebook/hiphop-php/issues/1065
> > 
> > I am not sure if this is a linux kernel bug or a xen hypervisor
> bug:
> > 
> > The output of /var/log/syslog:
> > 
> > Sep 18 10:55:58 web kernel: [92118.674736] general protection fault:
> 0000 [#1] SMP
> > Sep 18 10:55:58 web kernel: [92118.674754] Modules linked in:
> xenfs(F) xen_privcmd(F) xt_tcpudp(F) nf_conntrack_ipv4(F)
> nf_defrag_ipv4(F) xt_state(F) nf_conntrack(F) xt_comment(F)
> iptable_filter(F) ip_tables(F) x_tables(F) microcode(F) lp(F)
> parport(F)
> > Sep 18 10:55:58 web kernel: [92118.674781] CPU 0
> > Sep 18 10:55:58 web kernel: [92118.674787] Pid: 5020, comm: hhvm
> Tainted: GF            3.8.0-30-generic #44-Ubuntu
> > Sep 18 10:55:58 web kernel: [92118.674795] RIP:
> e030:[<ffffffff81003046>]  [<ffffffff81003046>]
> native_read_pmc+0x6/0x20


The link above seems to imply that this is a PV guest. The RDPMC instruction
is not currently emulated, which would cause a #GP to be injected into the guest.

I suspect that hhvm may be assuming that performance counters exist, and this
is not always the case.
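
For reference, the call trace above (perf_ioctl -> perf_event_reset ->
x86_perf_event_update -> native_read_pmc) is the in-kernel path taken when
user space resets a hardware counter through its perf_event file
descriptor. A rough user-space sketch of that path is below; it is only my
guess at the kind of thing hhvm does at startup, not code taken from hhvm:

    /* Open a hardware cycle counter and reset it via ioctl(); inside
     * the kernel this ends up reading the PMC, i.e. the path shown in
     * the trace above. */
    #include <linux/perf_event.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <string.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            struct perf_event_attr attr;
            memset(&attr, 0, sizeof(attr));
            attr.type = PERF_TYPE_HARDWARE;
            attr.size = sizeof(attr);
            attr.config = PERF_COUNT_HW_CPU_CYCLES;

            /* There is no glibc wrapper for perf_event_open(). */
            int fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
            if (fd < 0) {
                    perror("perf_event_open");
                    return 1;
            }

            ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
            ioctl(fd, PERF_EVENT_IOC_RESET, 0);   /* reads the PMC */

            close(fd);
            return 0;
    }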

Can you post CPUID leaf 0xa if this is an Intel processor, or leaf 0x80000001
if it is AMD (from the guest)? And 'dmesg | grep -i perf'.
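
If there is no cpuid tool installed, a small program along these lines
would do (a sketch using GCC's __get_cpuid() helper from <cpuid.h>; run
it inside the guest):

    /* Dump CPUID leaf 0xa (Intel architectural PMU information) and
     * leaf 0x80000001 (AMD extended feature bits). */
    #include <cpuid.h>
    #include <stdio.h>

    static void dump_leaf(unsigned int leaf)
    {
            unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;

            if (__get_cpuid(leaf, &eax, &ebx, &ecx, &edx))
                    printf("leaf 0x%08x: eax=%08x ebx=%08x ecx=%08x edx=%08x\n",
                           leaf, eax, ebx, ecx, edx);
            else
                    printf("leaf 0x%08x not reported\n", leaf);
    }

    int main(void)
    {
            dump_leaf(0x0000000a);
            dump_leaf(0x80000001);
            return 0;
    }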

-boris


> > Sep 18 10:55:58 web kernel: [92118.674809] RSP:
> e02b:ffff8800026b9d20  EFLAGS: 00010083
> > Sep 18 10:55:58 web kernel: [92118.674814] RAX: ffffffff81c1bd80
> RBX: 0000000000000000 RCX: 0000000000000000
> > Sep 18 10:55:58 web kernel: [92118.674819] RDX: 0000000000005f6c
> RSI: ffff8800f7c81900 RDI: 0000000000000000
> > Sep 18 10:55:58 web kernel: [92118.674824] RBP: ffff8800026b9d20
> R08: 00000000000337d8 R09: ffff8800e933dcc0
> > Sep 18 10:55:58 web kernel: [92118.674830] R10: 00007fff2d3caea0
> R11: 0000000000000246 R12: ffff8800f87ecc00
> > Sep 18 10:55:58 web kernel: [92118.674835] R13: ffff800000000001
> R14: ffff8800f87ecd70 R15: 0000000000000010
> > Sep 18 10:55:58 web kernel: [92118.674844] FS: 
> 00007f43d4c9b180(0000) GS:ffff8800ffc00000(0000)
> knlGS:0000000000000000
> > Sep 18 10:55:58 web kernel: [92118.674850] CS:  e033 DS: 0000 ES:
> 0000 CR0: 000000008005003b
> > Sep 18 10:55:58 web kernel: [92118.674855] CR2: 000000000105ebc0
> CR3: 00000000025cd000 CR4: 0000000000000660
> > Sep 18 10:55:58 web kernel: [92118.674861] DR0: 0000000000000000
> DR1: 0000000000000000 DR2: 0000000000000000
> > Sep 18 10:55:58 web kernel: [92118.674867] DR3: 0000000000000000
> DR6: 00000000ffff0ff0 DR7: 0000000000000400
> > Sep 18 10:55:58 web kernel: [92118.674872] Process hhvm (pid: 5020,
> threadinfo ffff8800026b8000, task ffff8800f7cfc5c0)
> > Sep 18 10:55:58 web kernel: [92118.674879] Stack:
> > Sep 18 10:55:58 web kernel: [92118.674882]  ffff8800026b9d58
> ffffffff81024625 0000000000000000 ffff8800f87ecc00
> > Sep 18 10:55:58 web kernel: [92118.674893]  ffff8800f7c8190c
> ffffffff811231a0 0000000000000005 ffff8800026b9d68
> > Sep 18 10:55:58 web kernel: [92118.674902]  ffffffff81024689
> ffff8800026b9d90 ffffffff811232a6 00000000ffff02ff
> > Sep 18 10:55:58 web kernel: [92118.674911] Call Trace:
> > Sep 18 10:55:58 web kernel: [92118.674920]  [<ffffffff81024625>]
> x86_perf_event_update+0x55/0xb0
> > Sep 18 10:55:58 web kernel: [92118.674929]  [<ffffffff811231a0>] ?
> perf_read+0x2f0/0x2f0
> > Sep 18 10:55:58 web kernel: [92118.674936]  [<ffffffff81024689>]
> x86_pmu_read+0x9/0x10
> > Sep 18 10:55:58 web kernel: [92118.674942]  [<ffffffff811232a6>]
> __perf_event_read+0x106/0x110
> > Sep 18 10:55:58 web kernel: [92118.674951]  [<ffffffff810b9987>]
> smp_call_function_single+0x147/0x170
> > Sep 18 10:55:58 web kernel: [92118.674959]  [<ffffffff811240d0>] ?
> perf_mmap+0x2f0/0x2f0
> > Sep 18 10:55:58 web kernel: [92118.674966]  [<ffffffff81122dda>]
> perf_event_read+0x10a/0x110
> > Sep 18 10:55:58 web kernel: [92118.674972]  [<ffffffff811240d0>] ?
> perf_mmap+0x2f0/0x2f0
> > Sep 18 10:55:58 web kernel: [92118.674979]  [<ffffffff811240dd>]
> perf_event_reset+0xd/0x20
> > Sep 18 10:55:58 web kernel: [92118.674987]  [<ffffffff8111ff08>]
> perf_event_for_each_child+0x38/0xa0
> > Sep 18 10:55:58 web kernel: [92118.674994]  [<ffffffff811240d0>] ?
> perf_mmap+0x2f0/0x2f0
> > Sep 18 10:55:58 web kernel: [92118.675001]  [<ffffffff8112255a>]
> perf_ioctl+0xba/0x340
> > Sep 18 10:55:58 web kernel: [92118.675009]  [<ffffffff811b1885>] ?
> fd_install+0x25/0x30
> > Sep 18 10:55:58 web kernel: [92118.675016]  [<ffffffff811a60e9>]
> do_vfs_ioctl+0x99/0x570
> > Sep 18 10:55:58 web kernel: [92118.675023]  [<ffffffff811a6651>]
> sys_ioctl+0x91/0xb0
> > Sep 18 10:55:58 web kernel: [92118.675031]  [<ffffffff816d575d>]
> system_call_fastpath+0x1a/0x1f
> > Sep 18 10:55:58 web kernel: [92118.675036] Code: 00 00 00 00 00 55
> 89 f9 48 89 e5 0f 32 31 ff 89 c0 48 c1 e2 20 89 3e 48 09 c2 48 89 d0
> 5d c3 66 0f 1f 44 00 00 55 89 f9 48 89 e5 <0f> 33 89 c0 48 c1 e2 20 48
> 09 c2 48 89 d0 5d c3 66 2e 0f 1f 84
> > Sep 18 10:55:58 web kernel: [92118.675103] RIP  [<ffffffff81003046>]
> native_read_pmc+0x6/0x20
> > Sep 18 10:55:58 web kernel: [92118.675110]  RSP <ffff8800026b9d20>
> > Sep 18 10:55:58 web kernel: [92118.675118] ---[ end trace
> 1a73231ba5f74716 ]---
> > 
> 
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [BUG] hhvm running on Ubuntu 13.04 with Xen Hypervisor - linux kernel panic
  2013-09-20 20:09 ` Konrad Rzeszutek Wilk
@ 2013-09-30  9:01   ` Craig Carnell
  0 siblings, 0 replies; 15+ messages in thread
From: Craig Carnell @ 2013-09-30  9:01 UTC (permalink / raw)
  To: Konrad Rzeszutek Wilk; +Cc: xen-devel@lists.xen.org

Am I supposed to give perf some options? :) Apologies!

usr/local/src/dev/hiphop-php/hphp/hhvm$ perf

 usage: perf [--version] [--help] COMMAND [ARGS]

 The most commonly used perf commands are:
   annotate        Read perf.data (created by perf record) and display
annotated code
   archive         Create archive with object files with build-ids found
in perf.data file
   bench           General framework for benchmark suites
   buildid-cache   Manage build-id cache.
   buildid-list    List the buildids in a perf.data file
   diff            Read two perf.data files and display the differential
profile
   evlist          List the event names in a perf.data file
   inject          Filter to augment the events stream with additional
information
   kmem            Tool to trace/measure kernel memory(slab) properties
   kvm             Tool to trace/measure kvm guest os
   list            List all symbolic event types
   lock            Analyze lock events
   record          Run a command and record its profile into perf.data
   report          Read perf.data (created by perf record) and display the
profile
   sched           Tool to trace/measure scheduler properties (latencies)
   script          Read perf.data (created by perf record) and display
trace output
   stat            Run a command and gather performance counter statistics
   test            Runs sanity tests.
   timechart       Tool to visualize total system behavior during a
workload
   top             System profiling tool.
   trace           strace inspired tool
   probe           Define new dynamic tracepoints

 See 'perf help COMMAND' for more information on a specific command.



On 20/09/2013 21:09, "Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
wrote:

>On Wed, Sep 18, 2013 at 11:21:18AM +0000, Craig Carnell wrote:
>> Hi,
>> 
>> I am trying out hiphop vm (the php just in time compiler). My setup is
>>a Rackspace Cloud Server running Ubuntu 13.04 with kernel
>>3.8.0-30-generic #44-Ubuntu SMP Thu Aug 22 20:52:24 UTC 2013 x86_64
>>x86_64 x86_64 GNU/Linux
>> 
>> The cloud server uses Xen Hypervisor.
>> 
>> Hiphopvm is compiled from source using the github repo. When running
>>hhvm from the command line (without any options or php application) the
>>system immediately crashes, throwing linux into a kernel panic and thus
>>death.
>> 
>
>And what happens if you run 'perf' by itself?
>
>
>> I have reported this issue on hiphop github issue page:
>> 
>> https://github.com/facebook/hiphop-php/issues/1065
>> 
>> I am not sure if this is a linux kernel bug or a xen hypervisor bug:
>> 
>> The output of /var/log/syslog:
>> 
>> Sep 18 10:55:58 web kernel: [92118.674736] general protection fault:
>>0000 [#1] SMP
>> Sep 18 10:55:58 web kernel: [92118.674754] Modules linked in: xenfs(F)
>>xen_privcmd(F) xt_tcpudp(F) nf_conntrack_ipv4(F) nf_defrag_ipv4(F)
>>xt_state(F) nf_conntrack(F) xt_comment(F) iptable_filter(F) ip_tables(F)
>>x_tables(F) microcode(F) lp(F) parport(F)
>> Sep 18 10:55:58 web kernel: [92118.674781] CPU 0
>> Sep 18 10:55:58 web kernel: [92118.674787] Pid: 5020, comm: hhvm
>>Tainted: GF            3.8.0-30-generic #44-Ubuntu
>> Sep 18 10:55:58 web kernel: [92118.674795] RIP:
>>e030:[<ffffffff81003046>]  [<ffffffff81003046>] native_read_pmc+0x6/0x20
>> Sep 18 10:55:58 web kernel: [92118.674809] RSP: e02b:ffff8800026b9d20
>>EFLAGS: 00010083
>> Sep 18 10:55:58 web kernel: [92118.674814] RAX: ffffffff81c1bd80 RBX:
>>0000000000000000 RCX: 0000000000000000
>> Sep 18 10:55:58 web kernel: [92118.674819] RDX: 0000000000005f6c RSI:
>>ffff8800f7c81900 RDI: 0000000000000000
>> Sep 18 10:55:58 web kernel: [92118.674824] RBP: ffff8800026b9d20 R08:
>>00000000000337d8 R09: ffff8800e933dcc0
>> Sep 18 10:55:58 web kernel: [92118.674830] R10: 00007fff2d3caea0 R11:
>>0000000000000246 R12: ffff8800f87ecc00
>> Sep 18 10:55:58 web kernel: [92118.674835] R13: ffff800000000001 R14:
>>ffff8800f87ecd70 R15: 0000000000000010
>> Sep 18 10:55:58 web kernel: [92118.674844] FS:  00007f43d4c9b180(0000)
>>GS:ffff8800ffc00000(0000) knlGS:0000000000000000
>> Sep 18 10:55:58 web kernel: [92118.674850] CS:  e033 DS: 0000 ES: 0000
>>CR0: 000000008005003b
>> Sep 18 10:55:58 web kernel: [92118.674855] CR2: 000000000105ebc0 CR3:
>>00000000025cd000 CR4: 0000000000000660
>> Sep 18 10:55:58 web kernel: [92118.674861] DR0: 0000000000000000 DR1:
>>0000000000000000 DR2: 0000000000000000
>> Sep 18 10:55:58 web kernel: [92118.674867] DR3: 0000000000000000 DR6:
>>00000000ffff0ff0 DR7: 0000000000000400
>> Sep 18 10:55:58 web kernel: [92118.674872] Process hhvm (pid: 5020,
>>threadinfo ffff8800026b8000, task ffff8800f7cfc5c0)
>> Sep 18 10:55:58 web kernel: [92118.674879] Stack:
>> Sep 18 10:55:58 web kernel: [92118.674882]  ffff8800026b9d58
>>ffffffff81024625 0000000000000000 ffff8800f87ecc00
>> Sep 18 10:55:58 web kernel: [92118.674893]  ffff8800f7c8190c
>>ffffffff811231a0 0000000000000005 ffff8800026b9d68
>> Sep 18 10:55:58 web kernel: [92118.674902]  ffffffff81024689
>>ffff8800026b9d90 ffffffff811232a6 00000000ffff02ff
>> Sep 18 10:55:58 web kernel: [92118.674911] Call Trace:
>> Sep 18 10:55:58 web kernel: [92118.674920]  [<ffffffff81024625>]
>>x86_perf_event_update+0x55/0xb0
>> Sep 18 10:55:58 web kernel: [92118.674929]  [<ffffffff811231a0>] ?
>>perf_read+0x2f0/0x2f0
>> Sep 18 10:55:58 web kernel: [92118.674936]  [<ffffffff81024689>]
>>x86_pmu_read+0x9/0x10
>> Sep 18 10:55:58 web kernel: [92118.674942]  [<ffffffff811232a6>]
>>__perf_event_read+0x106/0x110
>> Sep 18 10:55:58 web kernel: [92118.674951]  [<ffffffff810b9987>]
>>smp_call_function_single+0x147/0x170
>> Sep 18 10:55:58 web kernel: [92118.674959]  [<ffffffff811240d0>] ?
>>perf_mmap+0x2f0/0x2f0
>> Sep 18 10:55:58 web kernel: [92118.674966]  [<ffffffff81122dda>]
>>perf_event_read+0x10a/0x110
>> Sep 18 10:55:58 web kernel: [92118.674972]  [<ffffffff811240d0>] ?
>>perf_mmap+0x2f0/0x2f0
>> Sep 18 10:55:58 web kernel: [92118.674979]  [<ffffffff811240dd>]
>>perf_event_reset+0xd/0x20
>> Sep 18 10:55:58 web kernel: [92118.674987]  [<ffffffff8111ff08>]
>>perf_event_for_each_child+0x38/0xa0
>> Sep 18 10:55:58 web kernel: [92118.674994]  [<ffffffff811240d0>] ?
>>perf_mmap+0x2f0/0x2f0
>> Sep 18 10:55:58 web kernel: [92118.675001]  [<ffffffff8112255a>]
>>perf_ioctl+0xba/0x340
>> Sep 18 10:55:58 web kernel: [92118.675009]  [<ffffffff811b1885>] ?
>>fd_install+0x25/0x30
>> Sep 18 10:55:58 web kernel: [92118.675016]  [<ffffffff811a60e9>]
>>do_vfs_ioctl+0x99/0x570
>> Sep 18 10:55:58 web kernel: [92118.675023]  [<ffffffff811a6651>]
>>sys_ioctl+0x91/0xb0
>> Sep 18 10:55:58 web kernel: [92118.675031]  [<ffffffff816d575d>]
>>system_call_fastpath+0x1a/0x1f
>> Sep 18 10:55:58 web kernel: [92118.675036] Code: 00 00 00 00 00 55 89
>>f9 48 89 e5 0f 32 31 ff 89 c0 48 c1 e2 20 89 3e 48 09 c2 48 89 d0 5d c3
>>66 0f 1f 44 00 00 55 89 f9 48 89 e5 <0f> 33 89 c0 48 c1 e2 20 48 09 c2
>>48 89 d0 5d c3 66 2e 0f 1f 84
>> Sep 18 10:55:58 web kernel: [92118.675103] RIP  [<ffffffff81003046>]
>>native_read_pmc+0x6/0x20
>> Sep 18 10:55:58 web kernel: [92118.675110]  RSP <ffff8800026b9d20>
>> Sep 18 10:55:58 web kernel: [92118.675118] ---[ end trace
>>1a73231ba5f74716 ]---
>> 
>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>
>

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [BUG] hhvm running on Ubuntu 13.04 with Xen Hypervisor - linux kernel panic
  2013-09-22  0:08 [BUG] hhvm running on Ubuntu 13.04 with Xen Hypervisor - linux kernel panic Boris Ostrovsky
@ 2013-09-30 11:02 ` Craig Carnell
  2013-09-30 21:35   ` Boris Ostrovsky
  0 siblings, 1 reply; 15+ messages in thread
From: Craig Carnell @ 2013-09-30 11:02 UTC (permalink / raw)
  To: Boris Ostrovsky, konrad.wilk@oracle.com; +Cc: xen-devel@lists.xen.org

Not sure what you mean by posting the leaf for an AMD processor; what is
the command? (Sorry, just a dumb PHP developer here!)

Here is the output you requested from dmesg:

dmesg | grep -i perf

[    0.004000] Initializing cgroup subsys perf_event
[    0.064156] Performance Events:

Sorry if it's not more helpful!

Craig.



On 22/09/2013 01:08, "Boris Ostrovsky" <boris.ostrovsky@oracle.com> wrote:

>
>----- konrad.wilk@oracle.com wrote:
>
>> On Wed, Sep 18, 2013 at 11:21:18AM +0000, Craig Carnell wrote:
>> > Hi,
>> > 
>> > I am trying out hiphop vm (the php just in time compiler). My setup
>> is a Rackspace Cloud Server running Ubuntu 13.04 with kernel
>> 3.8.0-30-generic #44-Ubuntu SMP Thu Aug 22 20:52:24 UTC 2013 x86_64
>> x86_64 x86_64 GNU/Linux
>> > 
>> > The cloud server uses Xen Hypervisor.
>> > 
>> > Hiphopvm is compiled from source using the github repo. When running
>> hhvm from the command line (without any options or php application)
>> the system immediately crashes, throwing linux into a kernel panic and
>> thus death.
>> > 
>> 
>> And what happens if you run 'perf' by itself?
>> 
>> 
>> > I have reported this issue on hiphop github issue page:
>> > 
>> > https://github.com/facebook/hiphop-php/issues/1065
>> > 
>> > I am not sure if this is a linux kernel bug or a xen hypervisor
>> bug:
>> > 
>> > The output of /var/log/syslog:
>> > 
>> > Sep 18 10:55:58 web kernel: [92118.674736] general protection fault:
>> 0000 [#1] SMP
>> > Sep 18 10:55:58 web kernel: [92118.674754] Modules linked in:
>> xenfs(F) xen_privcmd(F) xt_tcpudp(F) nf_conntrack_ipv4(F)
>> nf_defrag_ipv4(F) xt_state(F) nf_conntrack(F) xt_comment(F)
>> iptable_filter(F) ip_tables(F) x_tables(F) microcode(F) lp(F)
>> parport(F)
>> > Sep 18 10:55:58 web kernel: [92118.674781] CPU 0
>> > Sep 18 10:55:58 web kernel: [92118.674787] Pid: 5020, comm: hhvm
>> Tainted: GF            3.8.0-30-generic #44-Ubuntu
>> > Sep 18 10:55:58 web kernel: [92118.674795] RIP:
>> e030:[<ffffffff81003046>]  [<ffffffff81003046>]
>> native_read_pmc+0x6/0x20
>
>
>The link above seems to imply that this is a PV guest. RDPMC instruction
>is not currently emulated which would cause a #GP to the guest.
>
>I suspect that hhvm may be assuming that performance counters exist and
>this
>is not always the case.
>
>Can you post CPUID leaf 0xa if this is Intel processor and leaf 0x80000001
>if this is AMD (from the guest)? And 'dmesg | grep -i perf'.
>
>-boris
>
>
>> > Sep 18 10:55:58 web kernel: [92118.674809] RSP:
>> e02b:ffff8800026b9d20  EFLAGS: 00010083
>> > Sep 18 10:55:58 web kernel: [92118.674814] RAX: ffffffff81c1bd80
>> RBX: 0000000000000000 RCX: 0000000000000000
>> > Sep 18 10:55:58 web kernel: [92118.674819] RDX: 0000000000005f6c
>> RSI: ffff8800f7c81900 RDI: 0000000000000000
>> > Sep 18 10:55:58 web kernel: [92118.674824] RBP: ffff8800026b9d20
>> R08: 00000000000337d8 R09: ffff8800e933dcc0
>> > Sep 18 10:55:58 web kernel: [92118.674830] R10: 00007fff2d3caea0
>> R11: 0000000000000246 R12: ffff8800f87ecc00
>> > Sep 18 10:55:58 web kernel: [92118.674835] R13: ffff800000000001
>> R14: ffff8800f87ecd70 R15: 0000000000000010
>> > Sep 18 10:55:58 web kernel: [92118.674844] FS:
>> 00007f43d4c9b180(0000) GS:ffff8800ffc00000(0000)
>> knlGS:0000000000000000
>> > Sep 18 10:55:58 web kernel: [92118.674850] CS:  e033 DS: 0000 ES:
>> 0000 CR0: 000000008005003b
>> > Sep 18 10:55:58 web kernel: [92118.674855] CR2: 000000000105ebc0
>> CR3: 00000000025cd000 CR4: 0000000000000660
>> > Sep 18 10:55:58 web kernel: [92118.674861] DR0: 0000000000000000
>> DR1: 0000000000000000 DR2: 0000000000000000
>> > Sep 18 10:55:58 web kernel: [92118.674867] DR3: 0000000000000000
>> DR6: 00000000ffff0ff0 DR7: 0000000000000400
>> > Sep 18 10:55:58 web kernel: [92118.674872] Process hhvm (pid: 5020,
>> threadinfo ffff8800026b8000, task ffff8800f7cfc5c0)
>> > Sep 18 10:55:58 web kernel: [92118.674879] Stack:
>> > Sep 18 10:55:58 web kernel: [92118.674882]  ffff8800026b9d58
>> ffffffff81024625 0000000000000000 ffff8800f87ecc00
>> > Sep 18 10:55:58 web kernel: [92118.674893]  ffff8800f7c8190c
>> ffffffff811231a0 0000000000000005 ffff8800026b9d68
>> > Sep 18 10:55:58 web kernel: [92118.674902]  ffffffff81024689
>> ffff8800026b9d90 ffffffff811232a6 00000000ffff02ff
>> > Sep 18 10:55:58 web kernel: [92118.674911] Call Trace:
>> > Sep 18 10:55:58 web kernel: [92118.674920]  [<ffffffff81024625>]
>> x86_perf_event_update+0x55/0xb0
>> > Sep 18 10:55:58 web kernel: [92118.674929]  [<ffffffff811231a0>] ?
>> perf_read+0x2f0/0x2f0
>> > Sep 18 10:55:58 web kernel: [92118.674936]  [<ffffffff81024689>]
>> x86_pmu_read+0x9/0x10
>> > Sep 18 10:55:58 web kernel: [92118.674942]  [<ffffffff811232a6>]
>> __perf_event_read+0x106/0x110
>> > Sep 18 10:55:58 web kernel: [92118.674951]  [<ffffffff810b9987>]
>> smp_call_function_single+0x147/0x170
>> > Sep 18 10:55:58 web kernel: [92118.674959]  [<ffffffff811240d0>] ?
>> perf_mmap+0x2f0/0x2f0
>> > Sep 18 10:55:58 web kernel: [92118.674966]  [<ffffffff81122dda>]
>> perf_event_read+0x10a/0x110
>> > Sep 18 10:55:58 web kernel: [92118.674972]  [<ffffffff811240d0>] ?
>> perf_mmap+0x2f0/0x2f0
>> > Sep 18 10:55:58 web kernel: [92118.674979]  [<ffffffff811240dd>]
>> perf_event_reset+0xd/0x20
>> > Sep 18 10:55:58 web kernel: [92118.674987]  [<ffffffff8111ff08>]
>> perf_event_for_each_child+0x38/0xa0
>> > Sep 18 10:55:58 web kernel: [92118.674994]  [<ffffffff811240d0>] ?
>> perf_mmap+0x2f0/0x2f0
>> > Sep 18 10:55:58 web kernel: [92118.675001]  [<ffffffff8112255a>]
>> perf_ioctl+0xba/0x340
>> > Sep 18 10:55:58 web kernel: [92118.675009]  [<ffffffff811b1885>] ?
>> fd_install+0x25/0x30
>> > Sep 18 10:55:58 web kernel: [92118.675016]  [<ffffffff811a60e9>]
>> do_vfs_ioctl+0x99/0x570
>> > Sep 18 10:55:58 web kernel: [92118.675023]  [<ffffffff811a6651>]
>> sys_ioctl+0x91/0xb0
>> > Sep 18 10:55:58 web kernel: [92118.675031]  [<ffffffff816d575d>]
>> system_call_fastpath+0x1a/0x1f
>> > Sep 18 10:55:58 web kernel: [92118.675036] Code: 00 00 00 00 00 55
>> 89 f9 48 89 e5 0f 32 31 ff 89 c0 48 c1 e2 20 89 3e 48 09 c2 48 89 d0
>> 5d c3 66 0f 1f 44 00 00 55 89 f9 48 89 e5 <0f> 33 89 c0 48 c1 e2 20 48
>> 09 c2 48 89 d0 5d c3 66 2e 0f 1f 84
>> > Sep 18 10:55:58 web kernel: [92118.675103] RIP  [<ffffffff81003046>]
>> native_read_pmc+0x6/0x20
>> > Sep 18 10:55:58 web kernel: [92118.675110]  RSP <ffff8800026b9d20>
>> > Sep 18 10:55:58 web kernel: [92118.675118] ---[ end trace
>> 1a73231ba5f74716 ]---
>> > 
>> 
>> > _______________________________________________
>> > Xen-devel mailing list
>> > Xen-devel@lists.xen.org
>> > http://lists.xen.org/xen-devel
>> 
>> 
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [BUG] hhvm running on Ubuntu 13.04 with Xen Hypervisor - linux kernel panic
  2013-09-30 11:02 ` Craig Carnell
@ 2013-09-30 21:35   ` Boris Ostrovsky
  0 siblings, 0 replies; 15+ messages in thread
From: Boris Ostrovsky @ 2013-09-30 21:35 UTC (permalink / raw)
  To: Craig Carnell; +Cc: xen-devel@lists.xen.org

On 09/30/2013 07:02 AM, Craig Carnell wrote:
> Not sure what you mean by post leaf for an AMD processor, what is the
> command? (sorry just a dumb PHP developer here!)

Actually, since this is an AMD processor, CPUID won't help here.

>
> Here the output you requested from dmesg:
>
> dmesg | grep -i perf
>
> [    0.004000] Initializing cgroup subsys perf_event
> [    0.064156] Performance Events:

I'd expect something like
root@orochi-c> dmesg |grep -i perf
[    0.006473] Initializing cgroup subsys perf_event
[    0.053000] Performance Events: Fam15h core perfctr, Broken PMU 
hardware detected, using software events only.
[    0.054010] Failed to access perfctr msr (MSR c0010201 is 0)
root@orochi-c>


BTW, I was able to build and run hhvm on a PV guest (and I assume you 
are running a PV guest):

     root@orochi-c> ./hphp/hhvm/hhvm -m s
     mapping self...
     mapping self took 0'00" (42810 us) wall time
     loading static content...
     searching all files under source root...
     analyzing 31428 files under source root...
     loaded 0 bytes of static content in total
     loading static content took 0'00" (117386 us) wall time
     page server started
     all servers started

I don't know the Rackspace UI, so maybe you can't do this, but it would be
useful to see the Xen configuration file for your guest, as well as the
Xen version, boot options and such (output of 'xm info', for example).

-boris


>
> Sorry if it's not more helpful!
>
> Craig.
>
>
>
> On 22/09/2013 01:08, "Boris Ostrovsky" <boris.ostrovsky@oracle.com> wrote:
>
>> ----- konrad.wilk@oracle.com wrote:
>>
>>> On Wed, Sep 18, 2013 at 11:21:18AM +0000, Craig Carnell wrote:
>>>> Hi,
>>>>
>>>> I am trying out hiphop vm (the php just in time compiler). My setup
>>> is a Rackspace Cloud Server running Ubuntu 13.04 with kernel
>>> 3.8.0-30-generic #44-Ubuntu SMP Thu Aug 22 20:52:24 UTC 2013 x86_64
>>> x86_64 x86_64 GNU/Linux
>>>> The cloud server uses Xen Hypervisor.
>>>>
>>>> Hiphopvm is compiled from source using the github repo. When running
>>> hhvm from the command line (without any options or php application)
>>> the system immediately crashes, throwing linux into a kernel panic and
>>> thus death.
>>> And what happens if you run 'perf' by itself?
>>>
>>>
>>>> I have reported this issue on hiphop github issue page:
>>>>
>>>> https://github.com/facebook/hiphop-php/issues/1065
>>>>
>>>> I am not sure if this is a linux kernel bug or a xen hypervisor
>>> bug:
>>>> The output of /var/log/syslog:
>>>>
>>>> Sep 18 10:55:58 web kernel: [92118.674736] general protection fault:
>>> 0000 [#1] SMP
>>>> Sep 18 10:55:58 web kernel: [92118.674754] Modules linked in:
>>> xenfs(F) xen_privcmd(F) xt_tcpudp(F) nf_conntrack_ipv4(F)
>>> nf_defrag_ipv4(F) xt_state(F) nf_conntrack(F) xt_comment(F)
>>> iptable_filter(F) ip_tables(F) x_tables(F) microcode(F) lp(F)
>>> parport(F)
>>>> Sep 18 10:55:58 web kernel: [92118.674781] CPU 0
>>>> Sep 18 10:55:58 web kernel: [92118.674787] Pid: 5020, comm: hhvm
>>> Tainted: GF            3.8.0-30-generic #44-Ubuntu
>>>> Sep 18 10:55:58 web kernel: [92118.674795] RIP:
>>> e030:[<ffffffff81003046>]  [<ffffffff81003046>]
>>> native_read_pmc+0x6/0x20
>>
>> The link above seems to imply that this is a PV guest. RDPMC instruction
>> is not currently emulated which would cause a #GP to the guest.
>>
>> I suspect that hhvm may be assuming that performance counters exist and
>> this
>> is not always the case.
>>
>> Can you post CPUID leaf 0xa if this is Intel processor and leaf 0x80000001
>> if this is AMD (from the guest)? And 'dmesg | grep -i perf'.
>>
>> -boris
>>
>>
>>>> Sep 18 10:55:58 web kernel: [92118.674809] RSP:
>>> e02b:ffff8800026b9d20  EFLAGS: 00010083
>>>> Sep 18 10:55:58 web kernel: [92118.674814] RAX: ffffffff81c1bd80
>>> RBX: 0000000000000000 RCX: 0000000000000000
>>>> Sep 18 10:55:58 web kernel: [92118.674819] RDX: 0000000000005f6c
>>> RSI: ffff8800f7c81900 RDI: 0000000000000000
>>>> Sep 18 10:55:58 web kernel: [92118.674824] RBP: ffff8800026b9d20
>>> R08: 00000000000337d8 R09: ffff8800e933dcc0
>>>> Sep 18 10:55:58 web kernel: [92118.674830] R10: 00007fff2d3caea0
>>> R11: 0000000000000246 R12: ffff8800f87ecc00
>>>> Sep 18 10:55:58 web kernel: [92118.674835] R13: ffff800000000001
>>> R14: ffff8800f87ecd70 R15: 0000000000000010
>>>> Sep 18 10:55:58 web kernel: [92118.674844] FS:
>>> 00007f43d4c9b180(0000) GS:ffff8800ffc00000(0000)
>>> knlGS:0000000000000000
>>>> Sep 18 10:55:58 web kernel: [92118.674850] CS:  e033 DS: 0000 ES:
>>> 0000 CR0: 000000008005003b
>>>> Sep 18 10:55:58 web kernel: [92118.674855] CR2: 000000000105ebc0
>>> CR3: 00000000025cd000 CR4: 0000000000000660
>>>> Sep 18 10:55:58 web kernel: [92118.674861] DR0: 0000000000000000
>>> DR1: 0000000000000000 DR2: 0000000000000000
>>>> Sep 18 10:55:58 web kernel: [92118.674867] DR3: 0000000000000000
>>> DR6: 00000000ffff0ff0 DR7: 0000000000000400
>>>> Sep 18 10:55:58 web kernel: [92118.674872] Process hhvm (pid: 5020,
>>> threadinfo ffff8800026b8000, task ffff8800f7cfc5c0)
>>>> Sep 18 10:55:58 web kernel: [92118.674879] Stack:
>>>> Sep 18 10:55:58 web kernel: [92118.674882]  ffff8800026b9d58
>>> ffffffff81024625 0000000000000000 ffff8800f87ecc00
>>>> Sep 18 10:55:58 web kernel: [92118.674893]  ffff8800f7c8190c
>>> ffffffff811231a0 0000000000000005 ffff8800026b9d68
>>>> Sep 18 10:55:58 web kernel: [92118.674902]  ffffffff81024689
>>> ffff8800026b9d90 ffffffff811232a6 00000000ffff02ff
>>>> Sep 18 10:55:58 web kernel: [92118.674911] Call Trace:
>>>> Sep 18 10:55:58 web kernel: [92118.674920]  [<ffffffff81024625>]
>>> x86_perf_event_update+0x55/0xb0
>>>> Sep 18 10:55:58 web kernel: [92118.674929]  [<ffffffff811231a0>] ?
>>> perf_read+0x2f0/0x2f0
>>>> Sep 18 10:55:58 web kernel: [92118.674936]  [<ffffffff81024689>]
>>> x86_pmu_read+0x9/0x10
>>>> Sep 18 10:55:58 web kernel: [92118.674942]  [<ffffffff811232a6>]
>>> __perf_event_read+0x106/0x110
>>>> Sep 18 10:55:58 web kernel: [92118.674951]  [<ffffffff810b9987>]
>>> smp_call_function_single+0x147/0x170
>>>> Sep 18 10:55:58 web kernel: [92118.674959]  [<ffffffff811240d0>] ?
>>> perf_mmap+0x2f0/0x2f0
>>>> Sep 18 10:55:58 web kernel: [92118.674966]  [<ffffffff81122dda>]
>>> perf_event_read+0x10a/0x110
>>>> Sep 18 10:55:58 web kernel: [92118.674972]  [<ffffffff811240d0>] ?
>>> perf_mmap+0x2f0/0x2f0
>>>> Sep 18 10:55:58 web kernel: [92118.674979]  [<ffffffff811240dd>]
>>> perf_event_reset+0xd/0x20
>>>> Sep 18 10:55:58 web kernel: [92118.674987]  [<ffffffff8111ff08>]
>>> perf_event_for_each_child+0x38/0xa0
>>>> Sep 18 10:55:58 web kernel: [92118.674994]  [<ffffffff811240d0>] ?
>>> perf_mmap+0x2f0/0x2f0
>>>> Sep 18 10:55:58 web kernel: [92118.675001]  [<ffffffff8112255a>]
>>> perf_ioctl+0xba/0x340
>>>> Sep 18 10:55:58 web kernel: [92118.675009]  [<ffffffff811b1885>] ?
>>> fd_install+0x25/0x30
>>>> Sep 18 10:55:58 web kernel: [92118.675016]  [<ffffffff811a60e9>]
>>> do_vfs_ioctl+0x99/0x570
>>>> Sep 18 10:55:58 web kernel: [92118.675023]  [<ffffffff811a6651>]
>>> sys_ioctl+0x91/0xb0
>>>> Sep 18 10:55:58 web kernel: [92118.675031]  [<ffffffff816d575d>]
>>> system_call_fastpath+0x1a/0x1f
>>>> Sep 18 10:55:58 web kernel: [92118.675036] Code: 00 00 00 00 00 55
>>> 89 f9 48 89 e5 0f 32 31 ff 89 c0 48 c1 e2 20 89 3e 48 09 c2 48 89 d0
>>> 5d c3 66 0f 1f 44 00 00 55 89 f9 48 89 e5 <0f> 33 89 c0 48 c1 e2 20 48
>>> 09 c2 48 89 d0 5d c3 66 2e 0f 1f 84
>>>> Sep 18 10:55:58 web kernel: [92118.675103] RIP  [<ffffffff81003046>]
>>> native_read_pmc+0x6/0x20
>>>> Sep 18 10:55:58 web kernel: [92118.675110]  RSP <ffff8800026b9d20>
>>>> Sep 18 10:55:58 web kernel: [92118.675118] ---[ end trace
>>> 1a73231ba5f74716 ]---
>>>> _______________________________________________
>>>> Xen-devel mailing list
>>>> Xen-devel@lists.xen.org
>>>> http://lists.xen.org/xen-devel
>>>
>>> _______________________________________________
>>> Xen-devel mailing list
>>> Xen-devel@lists.xen.org
>>> http://lists.xen.org/xen-devel
>

^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2013-09-30 21:35 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2013-09-22  0:08 [BUG] hhvm running on Ubuntu 13.04 with Xen Hypervisor - linux kernel panic Boris Ostrovsky
2013-09-30 11:02 ` Craig Carnell
2013-09-30 21:35   ` Boris Ostrovsky
  -- strict thread matches above, loose matches on Subject: below --
2013-09-18 11:21 Craig Carnell
2013-09-18 11:23 ` Craig Carnell
2013-09-19  9:52 ` Wei Liu
2013-09-19 10:14   ` Craig Carnell
2013-09-19 10:28     ` Wei Liu
2013-09-19 11:51   ` Dietmar Hahn
2013-09-19 15:02     ` Craig Carnell
2013-09-20 12:02       ` Dietmar Hahn
2013-09-20 12:07         ` Craig Carnell
2013-09-20 12:33           ` Dietmar Hahn
2013-09-20 20:09 ` Konrad Rzeszutek Wilk
2013-09-30  9:01   ` Craig Carnell

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).