From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: Craig Carnell <ccarnell@tti-fc.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [BUG] hhvm running on Ubuntu 13.04 with Xen Hypervisor - linux kernel panic
Date: Mon, 30 Sep 2013 17:35:25 -0400
Message-ID: <5249EE9D.2050009@oracle.com>
In-Reply-To: <CE6EFC48.49FA%ccarnell@tti-fc.com>

On 09/30/2013 07:02 AM, Craig Carnell wrote:
> Not sure what you mean by 'post leaf' for an AMD processor; what is the
> command? (Sorry, just a dumb PHP developer here!)

Actually, since this is an AMD processor, CPUID won't help here.
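
For what it's worth, AMD's base performance counters are architectural
and have no CPUID feature flag, which is why the leaf is of limited use;
the closest hint CPUID gives is the PerfCtrExtCore bit in leaf
0x80000001 (ECX bit 23). If you want to dump that leaf from the guest
anyway, here is a minimal sketch; the leaf and bit numbers are from
AMD's manuals, everything else is illustrative:

    /* Dump CPUID leaf 0x80000001 and the PerfCtrExtCore bit.
     * Sketch only; build with gcc on x86_64. */
    #include <stdio.h>
    #include <cpuid.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        if (!__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx)) {
            fprintf(stderr, "leaf 0x80000001 not available\n");
            return 1;
        }
        printf("eax=%08x ebx=%08x ecx=%08x edx=%08x\n",
               eax, ebx, ecx, edx);
        printf("PerfCtrExtCore (ECX bit 23): %s\n",
               (ecx & (1u << 23)) ? "yes" : "no");
        return 0;
    }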

>
> Here is the output you requested from dmesg:
>
> dmesg | grep -i perf
>
> [    0.004000] Initializing cgroup subsys perf_event
> [    0.064156] Performance Events:

I'd expect something like:

root@orochi-c> dmesg | grep -i perf
[    0.006473] Initializing cgroup subsys perf_event
[    0.053000] Performance Events: Fam15h core perfctr, Broken PMU hardware detected, using software events only.
[    0.054010] Failed to access perfctr msr (MSR c0010201 is 0)
root@orochi-c>
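
The "Failed to access perfctr msr" line comes from the kernel's
boot-time probe, which writes a test value to counter 0 and reads it
back. If you want to poke at that MSR directly from the guest, here is
a sketch using the msr driver ('modprobe msr', run as root). 0xc0010201
is PerfCtr0 in AMD's extended core-counter MSR range; whether a PV
guest sees a real value, zero, or an error depends on how the
hypervisor handles the access:

    /* Read MSR 0xc0010201 (AMD PerfCtr0, extended core-counter range)
     * via the msr driver; the device uses the MSR number as the file
     * offset. Sketch only. */
    #include <stdio.h>
    #include <stdint.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        uint64_t val;
        int fd = open("/dev/cpu/0/msr", O_RDONLY);

        if (fd < 0) {
            perror("open /dev/cpu/0/msr");
            return 1;
        }
        if (pread(fd, &val, sizeof(val), 0xc0010201) != sizeof(val)) {
            perror("rdmsr 0xc0010201");
            close(fd);
            return 1;
        }
        printf("MSR 0xc0010201 = %#llx\n", (unsigned long long)val);
        close(fd);
        return 0;
    }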


BTW, I was able to build and run hhvm on a PV guest (and I assume you 
are running a PV guest):

     root@orochi-c> ./hphp/hhvm/hhvm -m s
     mapping self...
     mapping self took 0'00" (42810 us) wall time
     loading static content...
     searching all files under source root...
     analyzing 31428 files under source root...
     loaded 0 bytes of static content in total
     loading static content took 0'00" (117386 us) wall time
     page server started
     all servers started

I don't know the Rackspace UI, so maybe you can't do this, but it would
be useful to see the Xen configuration file for your guest, plus the
Xen version, boot options and such (the output of 'xm info', for
example).

-boris


>
> Sorry if it's not more helpful!
>
> Craig.
>
>
>
> On 22/09/2013 01:08, "Boris Ostrovsky" <boris.ostrovsky@oracle.com> wrote:
>
>> ----- konrad.wilk@oracle.com wrote:
>>
>>> On Wed, Sep 18, 2013 at 11:21:18AM +0000, Craig Carnell wrote:
>>>> Hi,
>>>>
>>>> I am trying out hiphop vm (the php just in time compiler). My setup
>>> is a Rackspace Cloud Server running Ubuntu 13.04 with kernel
>>> 3.8.0-30-generic #44-Ubuntu SMP Thu Aug 22 20:52:24 UTC 2013 x86_64
>>> x86_64 x86_64 GNU/Linux
>>>> The cloud server uses Xen Hypervisor.
>>>>
>>>> Hiphopvm is compiled from source using the github repo. When running
>>> hhvm from the command line (without any options or php application)
>>> the system immediately crashes, throwing linux into a kernel panic and
>>> thus death.
>>> And what happens if you run 'perf' by itself?
>>>
>>>
>>>> I have reported this issue on hiphop github issue page:
>>>>
>>>> https://github.com/facebook/hiphop-php/issues/1065
>>>>
>>>> I am not sure if this is a linux kernel bug or a xen hypervisor
>>> bug:
>>>> The output of /var/log/syslog:
>>>>
>>>> Sep 18 10:55:58 web kernel: [92118.674736] general protection fault:
>>> 0000 [#1] SMP
>>>> Sep 18 10:55:58 web kernel: [92118.674754] Modules linked in:
>>> xenfs(F) xen_privcmd(F) xt_tcpudp(F) nf_conntrack_ipv4(F)
>>> nf_defrag_ipv4(F) xt_state(F) nf_conntrack(F) xt_comment(F)
>>> iptable_filter(F) ip_tables(F) x_tables(F) microcode(F) lp(F)
>>> parport(F)
>>>> Sep 18 10:55:58 web kernel: [92118.674781] CPU 0
>>>> Sep 18 10:55:58 web kernel: [92118.674787] Pid: 5020, comm: hhvm
>>> Tainted: GF            3.8.0-30-generic #44-Ubuntu
>>>> Sep 18 10:55:58 web kernel: [92118.674795] RIP:
>>> e030:[<ffffffff81003046>]  [<ffffffff81003046>]
>>> native_read_pmc+0x6/0x20
>>
>> The link above seems to imply that this is a PV guest. The RDPMC
>> instruction is not currently emulated, which would cause a #GP in
>> the guest.
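
To make the above concrete, a minimal user-space sketch of the faulting
instruction: it executes the same RDPMC that native_read_pmc() wraps in
the trace above. The counter index and the user-space framing are
illustrative; run from plain user space it also assumes CR4.PCE is set,
otherwise the process takes the #GP (as SIGSEGV) itself:

    /* RDPMC reads performance counter ECX into EDX:EAX; this is the
     * instruction native_read_pmc() executes in the kernel. Sketch
     * only; counter index 0 is arbitrary. */
    #include <stdio.h>
    #include <stdint.h>

    static inline uint64_t read_pmc(uint32_t counter)
    {
        uint32_t lo, hi;

        asm volatile("rdpmc" : "=a" (lo), "=d" (hi) : "c" (counter));
        return ((uint64_t)hi << 32) | lo;
    }

    int main(void)
    {
        printf("PMC0 = %llu\n", (unsigned long long)read_pmc(0));
        return 0;
    }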
>>
>> I suspect that hhvm may be assuming that performance counters exist,
>> and this is not always the case.
>>
>> Can you post CPUID leaf 0xa if this is an Intel processor, or leaf
>> 0x80000001 if it is AMD (from the guest)? And 'dmesg | grep -i perf'.
>>
>> -boris
>>
>>
>>>> Sep 18 10:55:58 web kernel: [92118.674809] RSP:
>>> e02b:ffff8800026b9d20  EFLAGS: 00010083
>>>> Sep 18 10:55:58 web kernel: [92118.674814] RAX: ffffffff81c1bd80
>>> RBX: 0000000000000000 RCX: 0000000000000000
>>>> Sep 18 10:55:58 web kernel: [92118.674819] RDX: 0000000000005f6c
>>> RSI: ffff8800f7c81900 RDI: 0000000000000000
>>>> Sep 18 10:55:58 web kernel: [92118.674824] RBP: ffff8800026b9d20
>>> R08: 00000000000337d8 R09: ffff8800e933dcc0
>>>> Sep 18 10:55:58 web kernel: [92118.674830] R10: 00007fff2d3caea0
>>> R11: 0000000000000246 R12: ffff8800f87ecc00
>>>> Sep 18 10:55:58 web kernel: [92118.674835] R13: ffff800000000001
>>> R14: ffff8800f87ecd70 R15: 0000000000000010
>>>> Sep 18 10:55:58 web kernel: [92118.674844] FS:
>>> 00007f43d4c9b180(0000) GS:ffff8800ffc00000(0000)
>>> knlGS:0000000000000000
>>>> Sep 18 10:55:58 web kernel: [92118.674850] CS:  e033 DS: 0000 ES:
>>> 0000 CR0: 000000008005003b
>>>> Sep 18 10:55:58 web kernel: [92118.674855] CR2: 000000000105ebc0
>>> CR3: 00000000025cd000 CR4: 0000000000000660
>>>> Sep 18 10:55:58 web kernel: [92118.674861] DR0: 0000000000000000
>>> DR1: 0000000000000000 DR2: 0000000000000000
>>>> Sep 18 10:55:58 web kernel: [92118.674867] DR3: 0000000000000000
>>> DR6: 00000000ffff0ff0 DR7: 0000000000000400
>>>> Sep 18 10:55:58 web kernel: [92118.674872] Process hhvm (pid: 5020,
>>> threadinfo ffff8800026b8000, task ffff8800f7cfc5c0)
>>>> Sep 18 10:55:58 web kernel: [92118.674879] Stack:
>>>> Sep 18 10:55:58 web kernel: [92118.674882]  ffff8800026b9d58
>>> ffffffff81024625 0000000000000000 ffff8800f87ecc00
>>>> Sep 18 10:55:58 web kernel: [92118.674893]  ffff8800f7c8190c
>>> ffffffff811231a0 0000000000000005 ffff8800026b9d68
>>>> Sep 18 10:55:58 web kernel: [92118.674902]  ffffffff81024689
>>> ffff8800026b9d90 ffffffff811232a6 00000000ffff02ff
>>>> Sep 18 10:55:58 web kernel: [92118.674911] Call Trace:
>>>> Sep 18 10:55:58 web kernel: [92118.674920]  [<ffffffff81024625>]
>>> x86_perf_event_update+0x55/0xb0
>>>> Sep 18 10:55:58 web kernel: [92118.674929]  [<ffffffff811231a0>] ?
>>> perf_read+0x2f0/0x2f0
>>>> Sep 18 10:55:58 web kernel: [92118.674936]  [<ffffffff81024689>]
>>> x86_pmu_read+0x9/0x10
>>>> Sep 18 10:55:58 web kernel: [92118.674942]  [<ffffffff811232a6>]
>>> __perf_event_read+0x106/0x110
>>>> Sep 18 10:55:58 web kernel: [92118.674951]  [<ffffffff810b9987>]
>>> smp_call_function_single+0x147/0x170
>>>> Sep 18 10:55:58 web kernel: [92118.674959]  [<ffffffff811240d0>] ?
>>> perf_mmap+0x2f0/0x2f0
>>>> Sep 18 10:55:58 web kernel: [92118.674966]  [<ffffffff81122dda>]
>>> perf_event_read+0x10a/0x110
>>>> Sep 18 10:55:58 web kernel: [92118.674972]  [<ffffffff811240d0>] ?
>>> perf_mmap+0x2f0/0x2f0
>>>> Sep 18 10:55:58 web kernel: [92118.674979]  [<ffffffff811240dd>]
>>> perf_event_reset+0xd/0x20
>>>> Sep 18 10:55:58 web kernel: [92118.674987]  [<ffffffff8111ff08>]
>>> perf_event_for_each_child+0x38/0xa0
>>>> Sep 18 10:55:58 web kernel: [92118.674994]  [<ffffffff811240d0>] ?
>>> perf_mmap+0x2f0/0x2f0
>>>> Sep 18 10:55:58 web kernel: [92118.675001]  [<ffffffff8112255a>]
>>> perf_ioctl+0xba/0x340
>>>> Sep 18 10:55:58 web kernel: [92118.675009]  [<ffffffff811b1885>] ?
>>> fd_install+0x25/0x30
>>>> Sep 18 10:55:58 web kernel: [92118.675016]  [<ffffffff811a60e9>]
>>> do_vfs_ioctl+0x99/0x570
>>>> Sep 18 10:55:58 web kernel: [92118.675023]  [<ffffffff811a6651>]
>>> sys_ioctl+0x91/0xb0
>>>> Sep 18 10:55:58 web kernel: [92118.675031]  [<ffffffff816d575d>]
>>> system_call_fastpath+0x1a/0x1f
>>>> Sep 18 10:55:58 web kernel: [92118.675036] Code: 00 00 00 00 00 55
>>> 89 f9 48 89 e5 0f 32 31 ff 89 c0 48 c1 e2 20 89 3e 48 09 c2 48 89 d0
>>> 5d c3 66 0f 1f 44 00 00 55 89 f9 48 89 e5 <0f> 33 89 c0 48 c1 e2 20 48
>>> 09 c2 48 89 d0 5d c3 66 2e 0f 1f 84
>>>> Sep 18 10:55:58 web kernel: [92118.675103] RIP  [<ffffffff81003046>]
>>> native_read_pmc+0x6/0x20
>>>> Sep 18 10:55:58 web kernel: [92118.675110]  RSP <ffff8800026b9d20>
>>>> Sep 18 10:55:58 web kernel: [92118.675118] ---[ end trace
>>> 1a73231ba5f74716 ]---
>>>> _______________________________________________
>>>> Xen-devel mailing list
>>>> Xen-devel@lists.xen.org
>>>> http://lists.xen.org/xen-devel
>>>
>>> _______________________________________________
>>> Xen-devel mailing list
>>> Xen-devel@lists.xen.org
>>> http://lists.xen.org/xen-devel
>
