public inbox for kvm@vger.kernel.org
* [ANNOUNCE] kvm-51 release
@ 2007-11-07 17:28 Avi Kivity
       [not found] ` <4731F5B5.1000108-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
  0 siblings, 1 reply; 32+ messages in thread
From: Avi Kivity @ 2007-11-07 17:28 UTC (permalink / raw)
  To: kvm-devel

If you're having trouble on AMD systems, please try this out.

Changes from kvm-50:
- fix some x86 emulator one-byte insns (fixes W2K3 installer again)
- fix host hangs with NMI watchdog on AMD
- fix guest SMP on AMD
- fix dirty page tracking when clearing a guest page (Dor Laor)
- more portability work (Hollis Blanchard, Jerone Young)
- fix FlexPriority with guest smp (Sheng Yang)
- improve rpm specfile (Akio Takebe, me)
- fix external module vs portability work (Andrea Arcangeli)
- remove elpin bios due to license violation
- testsuite shutdown pio port
- don't advertise svm on the guest
- fix reset with kernel apic (Markus Rechberger)


Notes:
      If you use the modules bundled with kvm-51, you can use any version
of Linux from 2.6.9 upwards.
      If you use the modules bundled with Linux 2.6.20, you need to use
kvm-12.
      If you use the modules bundled with Linux 2.6.21, you need to use
kvm-17.
      Modules from Linux 2.6.22 and up will work with any kvm version from
kvm-22.  Some features may only be available in newer releases.
      For best performance, use Linux 2.6.23-rc2 or later as the host.

http://kvm.qumranet.com


-------------------------------------------------------------------------
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now >> http://get.splunk.com/


* Re: [ANNOUNCE] kvm-51 release
       [not found] ` <4731F5B5.1000108-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2007-11-07 19:35   ` Haydn Solomon
       [not found]     ` <47321384.8060405-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
  2007-11-09 10:25   ` Farkas Levente
  1 sibling, 1 reply; 32+ messages in thread
From: Haydn Solomon @ 2007-11-07 19:35 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-devel



First, thank you for the new release of kvm. I have a few problems to 
report with kvm-51.

1. When running an existing Windows XP ACPI multiprocessor HAL with -smp 2, 
it will sometimes hang on boot.
2. This may not be a major problem, but idle CPU usage is a little higher 
on release 51 than on 50. It was very low on 50, the lowest I've seen 
in a long time.
3. For me personally, the best-performing release to date is release 50.

Regards

Haydn

Avi Kivity wrote:
> If you're having trouble on AMD systems, please try this out.
>
> Changes from kvm-50:
> - fix some x86 emulator one-byte insns (fixes W2K3 installer again)
> - fix host hangs with NMI watchdog on AMD
> - fix guest SMP on AMD
> - fix dirty page tracking when clearing a guest page (Dor Laor)
> - more portability work (Hollis Blanchard, Jerone Young)
> - fix FlexPriority with guest smp (Sheng Yang)
> - improve rpm specfile (Akio Takebe, me)
> - fix external module vs portability work (Andrea Arcangeli)
> - remove elpin bios due to license violation
> - testsuite shutdown pio port
> - don't advertise svm on the guest
> - fix reset with kernel apic (Markus Rechberger)
>
>
> Notes:
>       If you use the modules bundled with kvm-51, you can use any version
> of Linux from 2.6.9 upwards.
>       If you use the modules bundled with Linux 2.6.20, you need to use
> kvm-12.
>       If you use the modules bundled with Linux 2.6.21, you need to use
> kvm-17.
>       Modules from Linux 2.6.22 and up will work with any kvm version from
> kvm-22.  Some features may only be available in newer releases.
>       For best performance, use Linux 2.6.23-rc2 or later as the host.
>
> http://kvm.qumranet.com
>
>
> _______________________________________________
> kvm-devel mailing list
> kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org
> https://lists.sourceforge.net/lists/listinfo/kvm-devel
>   



* Re: [ANNOUNCE] kvm-51 release
       [not found]     ` <47321384.8060405-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
@ 2007-11-07 19:48       ` Amit Shah
       [not found]         ` <200711080118.46304.amit.shah-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
  2007-11-08  5:51       ` Avi Kivity
  1 sibling, 1 reply; 32+ messages in thread
From: Amit Shah @ 2007-11-07 19:48 UTC (permalink / raw)
  To: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f; +Cc: Avi Kivity

On Thursday 08 November 2007 01:05:32 Haydn Solomon wrote:
> First, thank you for the new release of kvm. I have a few problems to
> report with kvm-51.
>
> 1. When running an existing Windows XP ACPI multiprocessor HAL with -smp 2,
> it will sometimes hang on boot.

You mean the guest hangs, right? What's your host system?

> 2. This may not be a major problem, but idle CPU usage is a little higher
> on release 51 than on 50. It was very low on 50, the lowest I've seen
> in a long time.
> 3. For me personally, the best-performing release to date is release 50.
>
> Regards
>
> Haydn



* Re: [ANNOUNCE] kvm-51 release
       [not found]         ` <200711080118.46304.amit.shah-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2007-11-07 19:55           ` Haydn Solomon
  0 siblings, 0 replies; 32+ messages in thread
From: Haydn Solomon @ 2007-11-07 19:55 UTC (permalink / raw)
  To: Amit Shah; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, Avi Kivity



On Nov 7, 2007 2:48 PM, Amit Shah <amit.shah-atKUWr5tajBWk0Htik3J/w@public.gmane.org> wrote:

> On Thursday 08 November 2007 01:05:32 Haydn Solomon wrote:
> > First, thank you for the new release of kvm. I have a few problems to
> > report with kvm-51.
> >
> > 1. When running an existing Windows XP ACPI multiprocessor HAL with -smp 2,
> > it will sometimes hang on boot.
>
> You mean the guest hangs, right? What's your host system?


Yes, sorry, I meant my guest hangs. Host details are:

Linux localhost.localdomain 2.6.23.1-21.fc7 #1 SMP Thu Nov 1 20:28:15 EDT
2007 x86_64 x86_64 x86_64 GNU/Linux

Output of /proc/cpuinfo:

processor       : 1
vendor_id       : GenuineIntel
cpu family      : 6
model           : 15
model name      : Intel(R) Core(TM)2 Duo CPU     T7500  @ 2.20GHz
stepping        : 10
cpu MHz         : 800.000
cache size      : 4096 KB
physical id     : 0
siblings        : 2
core id         : 1
cpu cores       : 2
fpu             : yes
fpu_exception   : yes
cpuid level     : 10
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca
cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm
constant_tsc arch_perfmon pebs bts rep_good pni monitor ds_cpl vmx est tm2
ssse3 cx16 xtpr lahf_lm ida
bogomips        : 4387.71
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:

OK, I'm running kvm-50 again and looking at my CPU usage, and it's about
the same as what I'm seeing on 51. However, I did upgrade my Fedora 7
kernel since last running kvm-50, so I think that probably explains the
CPU usage part. And it's not that the CPU usage is high by any means,
just that it was really low on my previous kernel.

Haydn

> > 2. This may not be a major problem, but idle CPU usage is a little higher
> > on release 51 than on 50. It was very low on 50, the lowest I've seen
> > in a long time.
> > 3. For me personally, the best-performing release to date is release 50.
> >
> > Regards
> >
> > Haydn
>



* Re: [ANNOUNCE] kvm-51 release
       [not found]     ` <47321384.8060405-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
  2007-11-07 19:48       ` Amit Shah
@ 2007-11-08  5:51       ` Avi Kivity
       [not found]         ` <4732A3F6.8070903-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
  1 sibling, 1 reply; 32+ messages in thread
From: Avi Kivity @ 2007-11-08  5:51 UTC (permalink / raw)
  To: Haydn Solomon; +Cc: kvm-devel

Haydn Solomon wrote:
> First, thank you for the new release of kvm. I have a few problems to
> report with kvm-51.
>
> 1. When running an existing Windows XP ACPI multiprocessor HAL with
> -smp 2, it will sometimes hang on boot.

This isn't new.  It isn't reported because few people run smp Windows,
as prior to FlexPriority/tpr-opt it was unbearably slow.

I'll look into it.

> 2. This may not be a major problem, but idle CPU usage is a little
> higher on release 51 than on 50. It was very low on 50, the lowest
> I've seen in a long time.

It shouldn't have changed.  What do you see?  Can you provide a snapshot
of kvm_stat while Windows is idling (a few minutes after load)?

> 3. For me personally, the best performing release to date is release 50.

kvm-51 shouldn't be all that different; it's mostly AMD stability
improvements and the FlexPriority smp fix.

>
> Regards
>
> Haydn
>
> Avi Kivity wrote:
>> If you're having trouble on AMD systems, please try this out.
>>
>> Changes from kvm-50:
>> - fix some x86 emulator one-byte insns (fixes W2K3 installer again)
>> - fix host hangs with NMI watchdog on AMD
>> - fix guest SMP on AMD
>> - fix dirty page tracking when clearing a guest page (Dor Laor)
>> - more portability work (Hollis Blanchard, Jerone Young)
>> - fix FlexPriority with guest smp (Sheng Yang)
>> - improve rpm specfile (Akio Takebe, me)
>> - fix external module vs portability work (Andrea Arcangeli)
>> - remove elpin bios due to license violation
>> - testsuite shutdown pio port
>> - don't advertise svm on the guest
>> - fix reset with kernel apic (Markus Rechberger)
>>
>>
>> Notes:
>>       If you use the modules bundled with kvm-51, you can use any version
>> of Linux from 2.6.9 upwards.
>>       If you use the modules bundled with Linux 2.6.20, you need to use
>> kvm-12.
>>       If you use the modules bundled with Linux 2.6.21, you need to use
>> kvm-17.
>>       Modules from Linux 2.6.22 and up will work with any kvm version from
>> kvm-22.  Some features may only be available in newer releases.
>>       For best performance, use Linux 2.6.23-rc2 or later as the host.
>>
>> http://kvm.qumranet.com
>>
>>


-- 
Do not meddle in the internals of kernels, for they are subtle and quick to panic.




* Re: [ANNOUNCE] kvm-51 release
       [not found]         ` <4732A3F6.8070903-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2007-11-08 13:05           ` Haydn Solomon
  0 siblings, 0 replies; 32+ messages in thread
From: Haydn Solomon @ 2007-11-08 13:05 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-devel



On Nov 8, 2007 12:51 AM, Avi Kivity <avi-atKUWr5tajBWk0Htik3J/w@public.gmane.org> wrote:

> Haydn Solomon wrote:
> > First, thank you for the new release of kvm. I have a few problems to
> > report with kvm-51.
> >
> > 1. When running an existing Windows XP ACPI multiprocessor HAL with
> > -smp 2, it will sometimes hang on boot.
>
> This isn't new.  It isn't reported because few people run smp Windows,
> as prior to FlexPriority/tpr-opt it was unbearably slow.

> I'll look into it.

Thanks.


> > 2. This may not be a major problem, but idle CPU usage is a little
> > higher on release 51 than on 50. It was very low on 50, the lowest
> > I've seen in a long time.
>
> It shouldn't have changed.  What do you see?  Can you provide a snapshot
> of kvm_stat while Windows is idling (a few minutes after load)?


I went back to kvm-50 and tested, and the load now is about the same as
on kvm-51. However, since last running release 50 I upgraded my kernel
(Fedora 7) and didn't pay attention to the load after the upgrade, so I'm
pretty sure this CPU usage difference is kernel-related.


> > 3. For me personally, the best performing release to date is release 50.
>
> kvm-51 shouldn't be all that different; it's mostly AMD stability
> improvements and the FlexPriority smp fix.


That impression was really based on the CPU usage, which is explained above.


> >
> > Regards
> >
> > Haydn
> >
> > Avi Kivity wrote:
> >> If you're having trouble on AMD systems, please try this out.
> >>
> >> Changes from kvm-50:
> >> - fix some x86 emulator one-byte insns (fixes W2K3 installer again)
> >> - fix host hangs with NMI watchdog on AMD
> >> - fix guest SMP on AMD
> >> - fix dirty page tracking when clearing a guest page (Dor Laor)
> >> - more portability work (Hollis Blanchard, Jerone Young)
> >> - fix FlexPriority with guest smp (Sheng Yang)
> >> - improve rpm specfile (Akio Takebe, me)
> >> - fix external module vs portability work (Andrea Arcangeli)
> >> - remove elpin bios due to license violation
> >> - testsuite shutdown pio port
> >> - don't advertise svm on the guest
> >> - fix reset with kernel apic (Markus Rechberger)
> >>
> >>
> >> Notes:
> >>       If you use the modules bundled with kvm-51, you can use any
> version
> >> of Linux from 2.6.9 upwards.
> >>       If you use the modules bundled with Linux 2.6.20, you need to use
> >> kvm-12.
> >>       If you use the modules bundled with Linux 2.6.21, you need to use
> >> kvm-17.
> >>       Modules from Linux 2.6.22 and up will work with any kvm version
> from
> >> kvm-22.  Some features may only be available in newer releases.
> >>       For best performance, use Linux 2.6.23-rc2 or later as the host.
> >>
> >> http://kvm.qumranet.com
> >>
> >>
> >>
>
>
> --
> Do not meddle in the internals of kernels, for they are subtle and quick
> to panic.
>
>



* Re: [ANNOUNCE] kvm-51 release
       [not found] ` <4731F5B5.1000108-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
  2007-11-07 19:35   ` Haydn Solomon
@ 2007-11-09 10:25   ` Farkas Levente
       [not found]     ` <473435B6.1000503-lWVWdrzSO4GHXe+LvDLADg@public.gmane.org>
  1 sibling, 1 reply; 32+ messages in thread
From: Farkas Levente @ 2007-11-09 10:25 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-devel

Avi Kivity wrote:
> If you're having trouble on AMD systems, please try this out.

This version is worse than kvm-50 :-(
setup:
- host:
  - Intel(R) Core(TM)2 Quad CPU Q6600  @ 2.40GHz
  - Intel S3000AHV
  - 8GB RAM
  - CentOS-5
  - kernel-2.6.18-8.1.14.el5 x86_64 64bit
- guest-1:
  - CentOS-5
  - kernel-2.6.18-8.1.14.el5 i386 32bit
- guest-2:
  - CentOS-5
  - kernel-2.6.18-8.1.14.el5 x86_64 64bit
- guest-3:
  - Mandrake-9
  - kernel-2.4.19.16mdk-1-1mdk 32bit
- guest-4:
  - Windows XP Professional 32bit
SMP is not working on any CentOS guest (the guests hang during boot), and
the host even crashes. The worst part is that the host crashes during boot
with another stack trace which I was not able to log.
I would really like to see some kind of stable version other than kvm-36.
I see there is huge ongoing work on ia64, virtio, libkvm and the arch
rearrangement, but wouldn't it be better to fix these basic issues first?
Like running two SMP guests (32- and 64-bit) on a 64-bit SMP host, just to
boot to the login screen.
This is what the host dumps when the guest stops:
------------------------------------------------------------
Ignoring de-assert INIT to vcpu 1
SIPI to vcpu 1 vector 0x06
SIPI to vcpu 1 vector 0x06
eth0: topology change detected, propagating
eth0: port 3(vnet1) entering forwarding state
Ignoring de-assert INIT to vcpu 2
SIPI to vcpu 2 vector 0x06
SIPI to vcpu 2 vector 0x06
Ignoring de-assert INIT to vcpu 3
SIPI to vcpu 3 vector 0x06
SIPI to vcpu 3 vector 0x06
BUG: soft lockup detected on CPU#1!

Call Trace:
 <IRQ>  [<ffffffff800b2cd7>] softlockup_tick+0xdb/0xed
 [<ffffffff80093493>] update_process_times+0x42/0x68
 [<ffffffff80073e08>] smp_local_timer_interrupt+0x23/0x47
 [<ffffffff800744ca>] smp_apic_timer_interrupt+0x41/0x47
 [<ffffffff8005bd4a>] apic_timer_interrupt+0x66/0x6c
 <EOI>  [<ffffffff88201d8b>] :kvm:kvm_flush_remote_tlbs+0x16e/0x188
 [<ffffffff88201d78>] :kvm:kvm_flush_remote_tlbs+0x15b/0x188
 [<ffffffff8820101b>] :kvm:ack_flush+0x0/0x1
 [<ffffffff882079ac>] :kvm:kvm_mmu_pte_write+0x1fc/0x330
 [<ffffffff88203a36>] :kvm:emulator_write_emulated_onepage+0x85/0xe5
 [<ffffffff8820c320>] :kvm:x86_emulate_insn+0x2e03/0x407f
 [<ffffffff80015e7e>] __pte_alloc+0x122/0x142
 [<ffffffff88225477>] :kvm_intel:vmcs_readl+0x17/0x1c
 [<ffffffff88203e13>] :kvm:emulate_instruction+0x152/0x290
 [<ffffffff8820716b>] :kvm:kvm_mmu_page_fault+0x5e/0xb4
 [<ffffffff882056dc>] :kvm:kvm_arch_vcpu_ioctl_run+0x28a/0x3a6
 [<ffffffff88202539>] :kvm:kvm_vcpu_ioctl+0xc3/0x388
 [<ffffffff8008515c>] __wake_up_common+0x3e/0x68
 [<ffffffff800626d0>] _spin_unlock_irqrestore+0x8/0x9
 [<ffffffff80117410>] avc_has_perm+0x43/0x55
 [<ffffffff80117f47>] inode_has_perm+0x56/0x63
 [<ffffffff8820245d>] :kvm:kvm_vm_ioctl+0x277/0x290
 [<ffffffff88226dcf>] :kvm_intel:vmx_vcpu_put+0x0/0xa3
 [<ffffffff80117fe8>] file_has_perm+0x94/0xa3
 [<ffffffff8003fca8>] do_ioctl+0x21/0x6b
 [<ffffffff8002faae>] vfs_ioctl+0x248/0x261
 [<ffffffff8004a2b4>] sys_ioctl+0x59/0x78
 [<ffffffff8005b349>] tracesys+0xd1/0xdc
------------------------------------------------------------

-- 
  Levente                               "Si vis pacem para bellum!"



* Re: [ANNOUNCE] kvm-51 release
       [not found]     ` <473435B6.1000503-lWVWdrzSO4GHXe+LvDLADg@public.gmane.org>
@ 2007-11-09 14:59       ` david ahern
       [not found]         ` <473475C2.1070908-FYB4Gu1CFyUAvxtiuMwx3w@public.gmane.org>
  2007-11-11  9:11       ` Avi Kivity
  1 sibling, 1 reply; 32+ messages in thread
From: david ahern @ 2007-11-09 14:59 UTC (permalink / raw)
  To: Farkas Levente; +Cc: kvm-devel, Avi Kivity

I found that I had to move to a newer kernel (2.6.23.1 is what I used) to get SMP guests to boot on RHEL5 hosts. It appears to be an issue with the host kernel.
 
david


Farkas Levente wrote:
> Avi Kivity wrote:
>> If you're having trouble on AMD systems, please try this out.
> 
> This version is worse than kvm-50 :-(
> setup:
> - host:
>   - Intel(R) Core(TM)2 Quad CPU Q6600  @ 2.40GHz
>   - Intel S3000AHV
>   - 8GB RAM
>   - CentOS-5
>   - kernel-2.6.18-8.1.14.el5 x86_64 64bit
> - guest-1:
>   - CentOS-5
>   - kernel-2.6.18-8.1.14.el5 i386 32bit
> - guest-2:
>   - CentOS-5
>   - kernel-2.6.18-8.1.14.el5 x86_64 64bit
> - guest-3:
>   - Mandrake-9
>   - kernel-2.4.19.16mdk-1-1mdk 32bit
> - guest-4:
>   - Windows XP Professional 32bit
> SMP is not working on any CentOS guest (the guests hang during boot), and
> the host even crashes. The worst part is that the host crashes during boot
> with another stack trace which I was not able to log.
> I would really like to see some kind of stable version other than kvm-36.
> I see there is huge ongoing work on ia64, virtio, libkvm and the arch
> rearrangement, but wouldn't it be better to fix these basic issues first?
> Like running two SMP guests (32- and 64-bit) on a 64-bit SMP host, just to
> boot to the login screen.
> This is what the host dumps when the guest stops:
> ------------------------------------------------------------
> Ignoring de-assert INIT to vcpu 1
> SIPI to vcpu 1 vector 0x06
> SIPI to vcpu 1 vector 0x06
> eth0: topology change detected, propagating
> eth0: port 3(vnet1) entering forwarding state
> Ignoring de-assert INIT to vcpu 2
> SIPI to vcpu 2 vector 0x06
> SIPI to vcpu 2 vector 0x06
> Ignoring de-assert INIT to vcpu 3
> SIPI to vcpu 3 vector 0x06
> SIPI to vcpu 3 vector 0x06
> BUG: soft lockup detected on CPU#1!
> 
> Call Trace:
>  <IRQ>  [<ffffffff800b2cd7>] softlockup_tick+0xdb/0xed
>  [<ffffffff80093493>] update_process_times+0x42/0x68
>  [<ffffffff80073e08>] smp_local_timer_interrupt+0x23/0x47
>  [<ffffffff800744ca>] smp_apic_timer_interrupt+0x41/0x47
>  [<ffffffff8005bd4a>] apic_timer_interrupt+0x66/0x6c
>  <EOI>  [<ffffffff88201d8b>] :kvm:kvm_flush_remote_tlbs+0x16e/0x188
>  [<ffffffff88201d78>] :kvm:kvm_flush_remote_tlbs+0x15b/0x188
>  [<ffffffff8820101b>] :kvm:ack_flush+0x0/0x1
>  [<ffffffff882079ac>] :kvm:kvm_mmu_pte_write+0x1fc/0x330
>  [<ffffffff88203a36>] :kvm:emulator_write_emulated_onepage+0x85/0xe5
>  [<ffffffff8820c320>] :kvm:x86_emulate_insn+0x2e03/0x407f
>  [<ffffffff80015e7e>] __pte_alloc+0x122/0x142
>  [<ffffffff88225477>] :kvm_intel:vmcs_readl+0x17/0x1c
>  [<ffffffff88203e13>] :kvm:emulate_instruction+0x152/0x290
>  [<ffffffff8820716b>] :kvm:kvm_mmu_page_fault+0x5e/0xb4
>  [<ffffffff882056dc>] :kvm:kvm_arch_vcpu_ioctl_run+0x28a/0x3a6
>  [<ffffffff88202539>] :kvm:kvm_vcpu_ioctl+0xc3/0x388
>  [<ffffffff8008515c>] __wake_up_common+0x3e/0x68
>  [<ffffffff800626d0>] _spin_unlock_irqrestore+0x8/0x9
>  [<ffffffff80117410>] avc_has_perm+0x43/0x55
>  [<ffffffff80117f47>] inode_has_perm+0x56/0x63
>  [<ffffffff8820245d>] :kvm:kvm_vm_ioctl+0x277/0x290
>  [<ffffffff88226dcf>] :kvm_intel:vmx_vcpu_put+0x0/0xa3
>  [<ffffffff80117fe8>] file_has_perm+0x94/0xa3
>  [<ffffffff8003fca8>] do_ioctl+0x21/0x6b
>  [<ffffffff8002faae>] vfs_ioctl+0x248/0x261
>  [<ffffffff8004a2b4>] sys_ioctl+0x59/0x78
>  [<ffffffff8005b349>] tracesys+0xd1/0xdc
> ------------------------------------------------------------
> 



* Re: [ANNOUNCE] kvm-51 release
       [not found]         ` <473475C2.1070908-FYB4Gu1CFyUAvxtiuMwx3w@public.gmane.org>
@ 2007-11-10  0:22           ` Farkas Levente
  2007-11-11  9:08           ` Avi Kivity
  1 sibling, 0 replies; 32+ messages in thread
From: Farkas Levente @ 2007-11-10  0:22 UTC (permalink / raw)
  To: david ahern; +Cc: kvm-devel, Avi Kivity

That would be really sad, since what I like about kvm is that I don't
have to compile a kernel, and so I'm able to follow upstream kernel
updates :-(((

david ahern wrote:
> I found that I had to move to a newer kernel (2.6.23.1 is what I used) to get SMP guests to boot on RHEL5 hosts. It appears to be an issue with the host kernel.
>  
> david
> 
> 
> Farkas Levente wrote:
>> Avi Kivity wrote:
>>> If you're having trouble on AMD systems, please try this out.
>> This version is worse than kvm-50 :-(
>> setup:
>> - host:
>>   - Intel(R) Core(TM)2 Quad CPU Q6600  @ 2.40GHz
>>   - Intel S3000AHV
>>   - 8GB RAM
>>   - CentOS-5
>>   - kernel-2.6.18-8.1.14.el5 x86_64 64bit
>> - guest-1:
>>   - CentOS-5
>>   - kernel-2.6.18-8.1.14.el5 i386 32bit
>> - guest-2:
>>   - CentOS-5
>>   - kernel-2.6.18-8.1.14.el5 x86_64 64bit
>> - guest-3:
>>   - Mandrake-9
>>   - kernel-2.4.19.16mdk-1-1mdk 32bit
>> - guest-4:
>>   - Windows XP Professional 32bit
>> SMP is not working on any CentOS guest (the guests hang during boot), and
>> the host even crashes. The worst part is that the host crashes during boot
>> with another stack trace which I was not able to log.
>> I would really like to see some kind of stable version other than kvm-36.
>> I see there is huge ongoing work on ia64, virtio, libkvm and the arch
>> rearrangement, but wouldn't it be better to fix these basic issues first?
>> Like running two SMP guests (32- and 64-bit) on a 64-bit SMP host, just to
>> boot to the login screen.
>> This is what the host dumps when the guest stops:
>> ------------------------------------------------------------
>> Ignoring de-assert INIT to vcpu 1
>> SIPI to vcpu 1 vector 0x06
>> SIPI to vcpu 1 vector 0x06
>> eth0: topology change detected, propagating
>> eth0: port 3(vnet1) entering forwarding state
>> Ignoring de-assert INIT to vcpu 2
>> SIPI to vcpu 2 vector 0x06
>> SIPI to vcpu 2 vector 0x06
>> Ignoring de-assert INIT to vcpu 3
>> SIPI to vcpu 3 vector 0x06
>> SIPI to vcpu 3 vector 0x06
>> BUG: soft lockup detected on CPU#1!
>>
>> Call Trace:
>>  <IRQ>  [<ffffffff800b2cd7>] softlockup_tick+0xdb/0xed
>>  [<ffffffff80093493>] update_process_times+0x42/0x68
>>  [<ffffffff80073e08>] smp_local_timer_interrupt+0x23/0x47
>>  [<ffffffff800744ca>] smp_apic_timer_interrupt+0x41/0x47
>>  [<ffffffff8005bd4a>] apic_timer_interrupt+0x66/0x6c
>>  <EOI>  [<ffffffff88201d8b>] :kvm:kvm_flush_remote_tlbs+0x16e/0x188
>>  [<ffffffff88201d78>] :kvm:kvm_flush_remote_tlbs+0x15b/0x188
>>  [<ffffffff8820101b>] :kvm:ack_flush+0x0/0x1
>>  [<ffffffff882079ac>] :kvm:kvm_mmu_pte_write+0x1fc/0x330
>>  [<ffffffff88203a36>] :kvm:emulator_write_emulated_onepage+0x85/0xe5
>>  [<ffffffff8820c320>] :kvm:x86_emulate_insn+0x2e03/0x407f
>>  [<ffffffff80015e7e>] __pte_alloc+0x122/0x142
>>  [<ffffffff88225477>] :kvm_intel:vmcs_readl+0x17/0x1c
>>  [<ffffffff88203e13>] :kvm:emulate_instruction+0x152/0x290
>>  [<ffffffff8820716b>] :kvm:kvm_mmu_page_fault+0x5e/0xb4
>>  [<ffffffff882056dc>] :kvm:kvm_arch_vcpu_ioctl_run+0x28a/0x3a6
>>  [<ffffffff88202539>] :kvm:kvm_vcpu_ioctl+0xc3/0x388
>>  [<ffffffff8008515c>] __wake_up_common+0x3e/0x68
>>  [<ffffffff800626d0>] _spin_unlock_irqrestore+0x8/0x9
>>  [<ffffffff80117410>] avc_has_perm+0x43/0x55
>>  [<ffffffff80117f47>] inode_has_perm+0x56/0x63
>>  [<ffffffff8820245d>] :kvm:kvm_vm_ioctl+0x277/0x290
>>  [<ffffffff88226dcf>] :kvm_intel:vmx_vcpu_put+0x0/0xa3
>>  [<ffffffff80117fe8>] file_has_perm+0x94/0xa3
>>  [<ffffffff8003fca8>] do_ioctl+0x21/0x6b
>>  [<ffffffff8002faae>] vfs_ioctl+0x248/0x261
>>  [<ffffffff8004a2b4>] sys_ioctl+0x59/0x78
>>  [<ffffffff8005b349>] tracesys+0xd1/0xdc
>> ------------------------------------------------------------
>>


-- 
  Levente                               "Si vis pacem para bellum!"



* Re: [ANNOUNCE] kvm-51 release
       [not found]         ` <473475C2.1070908-FYB4Gu1CFyUAvxtiuMwx3w@public.gmane.org>
  2007-11-10  0:22           ` Farkas Levente
@ 2007-11-11  9:08           ` Avi Kivity
  1 sibling, 0 replies; 32+ messages in thread
From: Avi Kivity @ 2007-11-11  9:08 UTC (permalink / raw)
  To: david ahern; +Cc: kvm-devel

david ahern wrote:
> I found that I had to move to a newer kernel (2.6.23.1 is what I used) to get SMP guests to boot on RHEL5 hosts. It appears to be an issue with the host kernel.
>   


Might also be a problem with the smp_call_function_single() emulation.

>  
> david
>
>
> Farkas Levente wrote:
>   
>> Avi Kivity wrote:
>>     
>>> If you're having trouble on AMD systems, please try this out.
>>>       
>> This version is worse than kvm-50 :-(
>> setup:
>> - host:
>>   - Intel(R) Core(TM)2 Quad CPU Q6600  @ 2.40GHz
>>   - Intel S3000AHV
>>   - 8GB RAM
>>   - CentOS-5
>>   - kernel-2.6.18-8.1.14.el5 x86_64 64bit
>> - guest-1:
>>   - CentOS-5
>>   - kernel-2.6.18-8.1.14.el5 i386 32bit
>> - guest-2:
>>   - CentOS-5
>>   - kernel-2.6.18-8.1.14.el5 x86_64 64bit
>> - guest-3:
>>   - Mandrake-9
>>   - kernel-2.4.19.16mdk-1-1mdk 32bit
>> - guest-4:
>>   - Windows XP Professional 32bit
>> SMP is not working on any CentOS guest (the guests hang during boot), and
>> the host even crashes. The worst part is that the host crashes during boot
>> with another stack trace which I was not able to log.
>> I would really like to see some kind of stable version other than kvm-36.
>> I see there is huge ongoing work on ia64, virtio, libkvm and the arch
>> rearrangement, but wouldn't it be better to fix these basic issues first?
>> Like running two SMP guests (32- and 64-bit) on a 64-bit SMP host, just to
>> boot to the login screen.
>> This is what the host dumps when the guest stops:
>> ------------------------------------------------------------
>> Ignoring de-assert INIT to vcpu 1
>> SIPI to vcpu 1 vector 0x06
>> SIPI to vcpu 1 vector 0x06
>> eth0: topology change detected, propagating
>> eth0: port 3(vnet1) entering forwarding state
>> Ignoring de-assert INIT to vcpu 2
>> SIPI to vcpu 2 vector 0x06
>> SIPI to vcpu 2 vector 0x06
>> Ignoring de-assert INIT to vcpu 3
>> SIPI to vcpu 3 vector 0x06
>> SIPI to vcpu 3 vector 0x06
>> BUG: soft lockup detected on CPU#1!
>>
>> Call Trace:
>>  <IRQ>  [<ffffffff800b2cd7>] softlockup_tick+0xdb/0xed
>>  [<ffffffff80093493>] update_process_times+0x42/0x68
>>  [<ffffffff80073e08>] smp_local_timer_interrupt+0x23/0x47
>>  [<ffffffff800744ca>] smp_apic_timer_interrupt+0x41/0x47
>>  [<ffffffff8005bd4a>] apic_timer_interrupt+0x66/0x6c
>>  <EOI>  [<ffffffff88201d8b>] :kvm:kvm_flush_remote_tlbs+0x16e/0x188
>>  [<ffffffff88201d78>] :kvm:kvm_flush_remote_tlbs+0x15b/0x188
>>  [<ffffffff8820101b>] :kvm:ack_flush+0x0/0x1
>>  [<ffffffff882079ac>] :kvm:kvm_mmu_pte_write+0x1fc/0x330
>>  [<ffffffff88203a36>] :kvm:emulator_write_emulated_onepage+0x85/0xe5
>>  [<ffffffff8820c320>] :kvm:x86_emulate_insn+0x2e03/0x407f
>>  [<ffffffff80015e7e>] __pte_alloc+0x122/0x142
>>  [<ffffffff88225477>] :kvm_intel:vmcs_readl+0x17/0x1c
>>  [<ffffffff88203e13>] :kvm:emulate_instruction+0x152/0x290
>>  [<ffffffff8820716b>] :kvm:kvm_mmu_page_fault+0x5e/0xb4
>>  [<ffffffff882056dc>] :kvm:kvm_arch_vcpu_ioctl_run+0x28a/0x3a6
>>  [<ffffffff88202539>] :kvm:kvm_vcpu_ioctl+0xc3/0x388
>>  [<ffffffff8008515c>] __wake_up_common+0x3e/0x68
>>  [<ffffffff800626d0>] _spin_unlock_irqrestore+0x8/0x9
>>  [<ffffffff80117410>] avc_has_perm+0x43/0x55
>>  [<ffffffff80117f47>] inode_has_perm+0x56/0x63
>>  [<ffffffff8820245d>] :kvm:kvm_vm_ioctl+0x277/0x290
>>  [<ffffffff88226dcf>] :kvm_intel:vmx_vcpu_put+0x0/0xa3
>>  [<ffffffff80117fe8>] file_has_perm+0x94/0xa3
>>  [<ffffffff8003fca8>] do_ioctl+0x21/0x6b
>>  [<ffffffff8002faae>] vfs_ioctl+0x248/0x261
>>  [<ffffffff8004a2b4>] sys_ioctl+0x59/0x78
>>  [<ffffffff8005b349>] tracesys+0xd1/0xdc
>> ------------------------------------------------------------
>>
>>     


-- 
error compiling committee.c: too many arguments to function



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [ANNOUNCE] kvm-51 release
       [not found]     ` <473435B6.1000503-lWVWdrzSO4GHXe+LvDLADg@public.gmane.org>
  2007-11-09 14:59       ` david ahern
@ 2007-11-11  9:11       ` Avi Kivity
       [not found]         ` <4736C752.7060703-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
  1 sibling, 1 reply; 32+ messages in thread
From: Avi Kivity @ 2007-11-11  9:11 UTC (permalink / raw)
  To: Farkas Levente; +Cc: kvm-devel

Farkas Levente wrote:
> Avi Kivity wrote:
>   
>> If you're having trouble on AMD systems, please try this out.
>>     
>
> this version worse than kvm-50:-(
> setup:
> - host:
>   - Intel(R) Core(TM)2 Quad CPU Q6600  @ 2.40GHz
>   - Intel S3000AHV
>   - 8GB RAM
>   - CentOS-5
>   - kernel-2.6.18-8.1.14.el5 x86_64 64bit
> - guest-1:
>   - CentOS-5
>   - kernel-2.6.18-8.1.14.el5 i386 32bit
> - guest-2:
>   - CentOS-5
>   - kernel-2.6.18-8.1.14.el5 x86_64 64bit
> - guest-3:
>   - Mandrake-9
>   - kernel-2.4.19.16mdk-1-1mdk 32bit
> - guest-4:
>   - Windows XP Professional 32bit
> smp not working on any centos guest (guests are hang during boot). even
> the host crash. the worst thing is the host crash during boot with
> another stack trace which i was not able to log.
> i really would like to see some kind of stable version other then
> kvm-36. i see there is a huge ongoing work on ia64, virtio, libkmv and
> arch rearrange, but wouldn't it be better to fix these basic issues
> first? like running two smp guest (32 and 64) on 64 smp host, just to
> boot until the login screen.
> this is when the guest stop and the host dump it:
>   

[...]

> Call Trace:
>  <IRQ>  [<ffffffff800b2cd7>] softlockup_tick+0xdb/0xed
>  [<ffffffff80093493>] update_process_times+0x42/0x68
>  [<ffffffff80073e08>] smp_local_timer_interrupt+0x23/0x47
>  [<ffffffff800744ca>] smp_apic_timer_interrupt+0x41/0x47
>  [<ffffffff8005bd4a>] apic_timer_interrupt+0x66/0x6c
>  <EOI>  [<ffffffff88201d8b>] :kvm:kvm_flush_remote_tlbs+0x16e/0x188
>  [<ffffffff88201d78>] :kvm:kvm_flush_remote_tlbs+0x15b/0x188
>  [<ffffffff8820101b>] :kvm:ack_flush+0x0/0x1
>   


Are you sure this is a regression relative to kvm-50?  Please recheck.

-- 
error compiling committee.c: too many arguments to function



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [ANNOUNCE] kvm-51 release
       [not found]         ` <4736C752.7060703-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2007-11-11 12:58           ` Farkas Levente
       [not found]             ` <4736FC77.2080804-lWVWdrzSO4GHXe+LvDLADg@public.gmane.org>
  0 siblings, 1 reply; 32+ messages in thread
From: Farkas Levente @ 2007-11-11 12:58 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-devel

Avi Kivity wrote:
> Farkas Levente wrote:
>> Avi Kivity wrote:
>>  
>>> If you're having trouble on AMD systems, please try this out.
>>>     
>>
>> this version worse than kvm-50:-(
>> setup:
>> - host:
>>   - Intel(R) Core(TM)2 Quad CPU Q6600  @ 2.40GHz
>>   - Intel S3000AHV
>>   - 8GB RAM
>>   - CentOS-5
>>   - kernel-2.6.18-8.1.14.el5 x86_64 64bit
>> - guest-1:
>>   - CentOS-5
>>   - kernel-2.6.18-8.1.14.el5 i386 32bit
>> - guest-2:
>>   - CentOS-5
>>   - kernel-2.6.18-8.1.14.el5 x86_64 64bit
>> - guest-3:
>>   - Mandrake-9
>>   - kernel-2.4.19.16mdk-1-1mdk 32bit
>> - guest-4:
>>   - Windows XP Professional 32bit
>> smp not working on any centos guest (guests are hang during boot). even
>> the host crash. the worst thing is the host crash during boot with
>> another stack trace which i was not able to log.
>> i really would like to see some kind of stable version other then
>> kvm-36. i see there is a huge ongoing work on ia64, virtio, libkmv and
>> arch rearrange, but wouldn't it be better to fix these basic issues
>> first? like running two smp guest (32 and 64) on 64 smp host, just to
>> boot until the login screen.
>> this is when the guest stop and the host dump it:
>>   
> 
> [...]
> 
>> Call Trace:
>>  <IRQ>  [<ffffffff800b2cd7>] softlockup_tick+0xdb/0xed
>>  [<ffffffff80093493>] update_process_times+0x42/0x68
>>  [<ffffffff80073e08>] smp_local_timer_interrupt+0x23/0x47
>>  [<ffffffff800744ca>] smp_apic_timer_interrupt+0x41/0x47
>>  [<ffffffff8005bd4a>] apic_timer_interrupt+0x66/0x6c
>>  <EOI>  [<ffffffff88201d8b>] :kvm:kvm_flush_remote_tlbs+0x16e/0x188
>>  [<ffffffff88201d78>] :kvm:kvm_flush_remote_tlbs+0x15b/0x188
>>  [<ffffffff8820101b>] :kvm:ack_flush+0x0/0x1
>>   
> 
> 
> Are you sure this is a regression relative to kvm-50?  Please recheck.

I'm not sure this is a regression, since kvm-50 was so terribly slow that
we switched back to kvm-46, and I couldn't catch any stack trace with kvm-50.
Anyway, even if it's not a regression, SMP is currently not working.

-- 
  Levente                               "Si vis pacem para bellum!"


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [ANNOUNCE] kvm-51 release
       [not found]             ` <4736FC77.2080804-lWVWdrzSO4GHXe+LvDLADg@public.gmane.org>
@ 2007-11-11 14:43               ` Avi Kivity
       [not found]                 ` <47371510.3020804-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
  0 siblings, 1 reply; 32+ messages in thread
From: Avi Kivity @ 2007-11-11 14:43 UTC (permalink / raw)
  To: Farkas Levente; +Cc: kvm-devel

Farkas Levente wrote:
> Avi Kivity wrote:
>   
>> Farkas Levente wrote:
>>     
>>> Avi Kivity wrote:
>>>  
>>>       
>>>> If you're having trouble on AMD systems, please try this out.
>>>>     
>>>>         
>>> this version worse than kvm-50:-(
>>> setup:
>>> - host:
>>>   - Intel(R) Core(TM)2 Quad CPU Q6600  @ 2.40GHz
>>>   - Intel S3000AHV
>>>   - 8GB RAM
>>>   - CentOS-5
>>>   - kernel-2.6.18-8.1.14.el5 x86_64 64bit
>>> - guest-1:
>>>   - CentOS-5
>>>   - kernel-2.6.18-8.1.14.el5 i386 32bit
>>> - guest-2:
>>>   - CentOS-5
>>>   - kernel-2.6.18-8.1.14.el5 x86_64 64bit
>>> - guest-3:
>>>   - Mandrake-9
>>>   - kernel-2.4.19.16mdk-1-1mdk 32bit
>>> - guest-4:
>>>   - Windows XP Professional 32bit
>>> smp not working on any centos guest (guests are hang during boot). even
>>> the host crash. the worst thing is the host crash during boot with
>>> another stack trace which i was not able to log.
>>> i really would like to see some kind of stable version other then
>>> kvm-36. i see there is a huge ongoing work on ia64, virtio, libkmv and
>>> arch rearrange, but wouldn't it be better to fix these basic issues
>>> first? like running two smp guest (32 and 64) on 64 smp host, just to
>>> boot until the login screen.
>>> this is when the guest stop and the host dump it:
>>>   
>>>       
>> [...]
>>
>>     
>>> Call Trace:
>>>  <IRQ>  [<ffffffff800b2cd7>] softlockup_tick+0xdb/0xed
>>>  [<ffffffff80093493>] update_process_times+0x42/0x68
>>>  [<ffffffff80073e08>] smp_local_timer_interrupt+0x23/0x47
>>>  [<ffffffff800744ca>] smp_apic_timer_interrupt+0x41/0x47
>>>  [<ffffffff8005bd4a>] apic_timer_interrupt+0x66/0x6c
>>>  <EOI>  [<ffffffff88201d8b>] :kvm:kvm_flush_remote_tlbs+0x16e/0x188
>>>  [<ffffffff88201d78>] :kvm:kvm_flush_remote_tlbs+0x15b/0x188
>>>  [<ffffffff8820101b>] :kvm:ack_flush+0x0/0x1
>>>   
>>>       
>> Are you sure this is a regression relative to kvm-50?  Please recheck.
>>     
>
> i', not sure this's a regression since kvm-50 was so terrible slow that
> we switch back to kvm-46. but i can't catch any stack trace with kvm-50.
> anyway even if it's not a regression it's currently not working with smp.
>
>   

I can't reproduce this on a centos system here running 2.6.18-8.el5 with 
a 4-way FC6 x86_64 as guest.  It appears to survive a kernel compile.

What does one need to do in order to reproduce this?

-- 
error compiling committee.c: too many arguments to function



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [ANNOUNCE] kvm-51 release
       [not found]                 ` <47371510.3020804-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2007-11-11 15:32                   ` david ahern
       [not found]                     ` <47372070.30604-FYB4Gu1CFyUAvxtiuMwx3w@public.gmane.org>
  0 siblings, 1 reply; 32+ messages in thread
From: david ahern @ 2007-11-11 15:32 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-devel

I now have hosts running both 32-bit and 64-bit versions of RHEL5.1. I will retry SMP guests on the RHEL5 kernel, but at present kvm-51 does not compile:

make -C kernel
make[1]: Entering directory `/opt/kvm/kvm-51/kernel'
make -C /lib/modules/2.6.18-53.el5/build M=`pwd` "$@"
make[2]: Entering directory `/usr/src/kernels/2.6.18-53.el5-i686'
  LD      /opt/kvm/kvm-51/kernel/built-in.o
  CC [M]  /opt/kvm/kvm-51/kernel/svm.o
  CC [M]  /opt/kvm/kvm-51/kernel/vmx.o
  CC [M]  /opt/kvm/kvm-51/kernel/vmx-debug.o
  CC [M]  /opt/kvm/kvm-51/kernel/kvm_main.o
/opt/kvm/kvm-51/kernel/kvm_main.c: In function ‘kvm_cpu_hotplug’:
/opt/kvm/kvm-51/kernel/kvm_main.c:1348: error: ‘CPU_UP_CANCELED_FROZEN’ undeclared (first use in this function)
/opt/kvm/kvm-51/kernel/kvm_main.c:1348: error: (Each undeclared identifier is reported only once
/opt/kvm/kvm-51/kernel/kvm_main.c:1348: error: for each function it appears in.)
make[3]: *** [/opt/kvm/kvm-51/kernel/kvm_main.o] Error 1
make[2]: *** [_module_/opt/kvm/kvm-51/kernel] Error 2
make[2]: Leaving directory `/usr/src/kernels/2.6.18-53.el5-i686'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/opt/kvm/kvm-51/kernel'
make: *** [kernel] Error 2

david


Avi Kivity wrote:
> Farkas Levente wrote:
>> Avi Kivity wrote:
>>   
>>> Farkas Levente wrote:
>>>     
>>>> Avi Kivity wrote:
>>>>  
>>>>       
>>>>> If you're having trouble on AMD systems, please try this out.
>>>>>     
>>>>>         
>>>> this version worse than kvm-50:-(
>>>> setup:
>>>> - host:
>>>>   - Intel(R) Core(TM)2 Quad CPU Q6600  @ 2.40GHz
>>>>   - Intel S3000AHV
>>>>   - 8GB RAM
>>>>   - CentOS-5
>>>>   - kernel-2.6.18-8.1.14.el5 x86_64 64bit
>>>> - guest-1:
>>>>   - CentOS-5
>>>>   - kernel-2.6.18-8.1.14.el5 i386 32bit
>>>> - guest-2:
>>>>   - CentOS-5
>>>>   - kernel-2.6.18-8.1.14.el5 x86_64 64bit
>>>> - guest-3:
>>>>   - Mandrake-9
>>>>   - kernel-2.4.19.16mdk-1-1mdk 32bit
>>>> - guest-4:
>>>>   - Windows XP Professional 32bit
>>>> smp not working on any centos guest (guests are hang during boot). even
>>>> the host crash. the worst thing is the host crash during boot with
>>>> another stack trace which i was not able to log.
>>>> i really would like to see some kind of stable version other then
>>>> kvm-36. i see there is a huge ongoing work on ia64, virtio, libkmv and
>>>> arch rearrange, but wouldn't it be better to fix these basic issues
>>>> first? like running two smp guest (32 and 64) on 64 smp host, just to
>>>> boot until the login screen.
>>>> this is when the guest stop and the host dump it:
>>>>   
>>>>       
>>> [...]
>>>
>>>     
>>>> Call Trace:
>>>>  <IRQ>  [<ffffffff800b2cd7>] softlockup_tick+0xdb/0xed
>>>>  [<ffffffff80093493>] update_process_times+0x42/0x68
>>>>  [<ffffffff80073e08>] smp_local_timer_interrupt+0x23/0x47
>>>>  [<ffffffff800744ca>] smp_apic_timer_interrupt+0x41/0x47
>>>>  [<ffffffff8005bd4a>] apic_timer_interrupt+0x66/0x6c
>>>>  <EOI>  [<ffffffff88201d8b>] :kvm:kvm_flush_remote_tlbs+0x16e/0x188
>>>>  [<ffffffff88201d78>] :kvm:kvm_flush_remote_tlbs+0x15b/0x188
>>>>  [<ffffffff8820101b>] :kvm:ack_flush+0x0/0x1
>>>>   
>>>>       
>>> Are you sure this is a regression relative to kvm-50?  Please recheck.
>>>     
>> i', not sure this's a regression since kvm-50 was so terrible slow that
>> we switch back to kvm-46. but i can't catch any stack trace with kvm-50.
>> anyway even if it's not a regression it's currently not working with smp.
>>
>>   
> 
> I can't reproduce this on a centos system here running 2.6.18-8.el5 with 
> a 4-way FC6 x86_64 as guest.  It appears to survive a kernel compile.
> 
> What does one need to do in order to reproduce this?
> 


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [ANNOUNCE] kvm-51 release
       [not found]                     ` <47372070.30604-FYB4Gu1CFyUAvxtiuMwx3w@public.gmane.org>
@ 2007-11-11 15:55                       ` david ahern
       [not found]                         ` <47372600.9080009-FYB4Gu1CFyUAvxtiuMwx3w@public.gmane.org>
  0 siblings, 1 reply; 32+ messages in thread
From: david ahern @ 2007-11-11 15:55 UTC (permalink / raw)
  Cc: kvm-devel, Avi Kivity

In RHEL 5.1 <linux/notifier.h> defines:

#define CPU_TASKS_FROZEN    0x0010

#define CPU_ONLINE_FROZEN   (CPU_ONLINE | CPU_TASKS_FROZEN)
#define CPU_DEAD_FROZEN     (CPU_DEAD | CPU_TASKS_FROZEN)

which means the '#ifndef CPU_TASKS_FROZEN' guard in kvm-51/kernel/external-module-compat.h needs to handle this case. For my purposes, I just moved the #endif up so the guard covers only what RHEL 5.1 already defines.

With that change, kvm-51 compiles. I am still seeing 32-bit SMP guests hang on boot for both 32-bit and 64-bit hosts (again running RHEL5.1). 

david



david ahern wrote:
> I now have hosts running both 32-bit and 64-bit versions of RHEL5.1. I will retry SMP guests on the RHEL5 kernel, but at present kvm-51 does not compile:
> 
> make -C kernel
> make[1]: Entering directory `/opt/kvm/kvm-51/kernel'
> make -C /lib/modules/2.6.18-53.el5/build M=`pwd` "$@"
> make[2]: Entering directory `/usr/src/kernels/2.6.18-53.el5-i686'
>   LD      /opt/kvm/kvm-51/kernel/built-in.o
>   CC [M]  /opt/kvm/kvm-51/kernel/svm.o
>   CC [M]  /opt/kvm/kvm-51/kernel/vmx.o
>   CC [M]  /opt/kvm/kvm-51/kernel/vmx-debug.o
>   CC [M]  /opt/kvm/kvm-51/kernel/kvm_main.o
> /opt/kvm/kvm-51/kernel/kvm_main.c: In function ‘kvm_cpu_hotplug’:
> /opt/kvm/kvm-51/kernel/kvm_main.c:1348: error: ‘CPU_UP_CANCELED_FROZEN’ undeclared (first use in this function)
> /opt/kvm/kvm-51/kernel/kvm_main.c:1348: error: (Each undeclared identifier is reported only once
> /opt/kvm/kvm-51/kernel/kvm_main.c:1348: error: for each function it appears in.)
> make[3]: *** [/opt/kvm/kvm-51/kernel/kvm_main.o] Error 1
> make[2]: *** [_module_/opt/kvm/kvm-51/kernel] Error 2
> make[2]: Leaving directory `/usr/src/kernels/2.6.18-53.el5-i686'
> make[1]: *** [all] Error 2
> make[1]: Leaving directory `/opt/kvm/kvm-51/kernel'
> make: *** [kernel] Error 2
> 
> david
> 
> 
> Avi Kivity wrote:
>> Farkas Levente wrote:
>>> Avi Kivity wrote:
>>>   
>>>> Farkas Levente wrote:
>>>>     
>>>>> Avi Kivity wrote:
>>>>>  
>>>>>       
>>>>>> If you're having trouble on AMD systems, please try this out.
>>>>>>     
>>>>>>         
>>>>> this version worse than kvm-50:-(
>>>>> setup:
>>>>> - host:
>>>>>   - Intel(R) Core(TM)2 Quad CPU Q6600  @ 2.40GHz
>>>>>   - Intel S3000AHV
>>>>>   - 8GB RAM
>>>>>   - CentOS-5
>>>>>   - kernel-2.6.18-8.1.14.el5 x86_64 64bit
>>>>> - guest-1:
>>>>>   - CentOS-5
>>>>>   - kernel-2.6.18-8.1.14.el5 i386 32bit
>>>>> - guest-2:
>>>>>   - CentOS-5
>>>>>   - kernel-2.6.18-8.1.14.el5 x86_64 64bit
>>>>> - guest-3:
>>>>>   - Mandrake-9
>>>>>   - kernel-2.4.19.16mdk-1-1mdk 32bit
>>>>> - guest-4:
>>>>>   - Windows XP Professional 32bit
>>>>> smp not working on any centos guest (guests are hang during boot). even
>>>>> the host crash. the worst thing is the host crash during boot with
>>>>> another stack trace which i was not able to log.
>>>>> i really would like to see some kind of stable version other then
>>>>> kvm-36. i see there is a huge ongoing work on ia64, virtio, libkmv and
>>>>> arch rearrange, but wouldn't it be better to fix these basic issues
>>>>> first? like running two smp guest (32 and 64) on 64 smp host, just to
>>>>> boot until the login screen.
>>>>> this is when the guest stop and the host dump it:
>>>>>   
>>>>>       
>>>> [...]
>>>>
>>>>     
>>>>> Call Trace:
>>>>>  <IRQ>  [<ffffffff800b2cd7>] softlockup_tick+0xdb/0xed
>>>>>  [<ffffffff80093493>] update_process_times+0x42/0x68
>>>>>  [<ffffffff80073e08>] smp_local_timer_interrupt+0x23/0x47
>>>>>  [<ffffffff800744ca>] smp_apic_timer_interrupt+0x41/0x47
>>>>>  [<ffffffff8005bd4a>] apic_timer_interrupt+0x66/0x6c
>>>>>  <EOI>  [<ffffffff88201d8b>] :kvm:kvm_flush_remote_tlbs+0x16e/0x188
>>>>>  [<ffffffff88201d78>] :kvm:kvm_flush_remote_tlbs+0x15b/0x188
>>>>>  [<ffffffff8820101b>] :kvm:ack_flush+0x0/0x1
>>>>>   
>>>>>       
>>>> Are you sure this is a regression relative to kvm-50?  Please recheck.
>>>>     
>>> i', not sure this's a regression since kvm-50 was so terrible slow that
>>> we switch back to kvm-46. but i can't catch any stack trace with kvm-50.
>>> anyway even if it's not a regression it's currently not working with smp.
>>>
>>>   
>> I can't reproduce this on a centos system here running 2.6.18-8.el5 with 
>> a 4-way FC6 x86_64 as guest.  It appears to survive a kernel compile.
>>
>> What does one need to do in order to reproduce this?
>>
> 


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [ANNOUNCE] kvm-51 release
       [not found]                         ` <47372600.9080009-FYB4Gu1CFyUAvxtiuMwx3w@public.gmane.org>
@ 2007-11-11 16:53                           ` Avi Kivity
       [not found]                             ` <47373380.8040809-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
  0 siblings, 1 reply; 32+ messages in thread
From: Avi Kivity @ 2007-11-11 16:53 UTC (permalink / raw)
  To: david ahern; +Cc: kvm-devel

[-- Attachment #1: Type: text/plain, Size: 758 bytes --]

david ahern wrote:
> In RHEL 5.1 <linux/notifier.h> defines:
>
> #define CPU_TASKS_FROZEN    0x0010
>
> #define CPU_ONLINE_FROZEN   (CPU_ONLINE | CPU_TASKS_FROZEN)
> #define CPU_DEAD_FROZEN     (CPU_DEAD | CPU_TASKS_FROZEN)
>
> which means in kvm-51/kernel/external-module-compat.h the '#ifndef CPU_TASKS_FROZEN' needs to have a case. For my purposes, I just moved up the endif around what was defined.
>   

I committed a change which renders this unnecessary.  Will be part of 
kvm-52.

> With that change, kvm-51 compiles. I am still seeing 32-bit SMP guests hang on boot for both 32-bit and 64-bit hosts (again running RHEL5.1). 
>
>   

I still don't.  Can you test the attached patch?


-- 
error compiling committee.c: too many arguments to function


[-- Attachment #2: scfs-simplify.patch --]
[-- Type: text/x-patch, Size: 1404 bytes --]

diff --git a/kernel/external-module-compat.h b/kernel/external-module-compat.h
index 2b005e9..29917e4 100644
--- a/kernel/external-module-compat.h
+++ b/kernel/external-module-compat.h
@@ -45,20 +45,25 @@
 #include <linux/spinlock.h>
 #include <linux/smp.h>
 
-static spinlock_t scfs_lock = SPIN_LOCK_UNLOCKED;
-static int scfs_cpu;
-static void (*scfs_func)(void *info);
+struct scfs_thunk_info {
+	int cpu;
+	void (*func)(void *info);
+	void *info;
+};
 
-static void scfs_thunk(void *info)
+static inline void scfs_thunk(void *_thunk)
 {
-	if (raw_smp_processor_id() == scfs_cpu)
-		scfs_func(info);
+	struct scfs_thunk_info *thunk = _thunk;
+
+	if (raw_smp_processor_id() == thunk->cpu)
+		thunk->func(thunk->info);
 }
 
 static inline int smp_call_function_single1(int cpu, void (*func)(void *info),
 					   void *info, int nonatomic, int wait)
 {
 	int r, this_cpu;
+	struct scfs_thunk_info thunk;
 
 	this_cpu = get_cpu();
 	if (cpu == this_cpu) {
@@ -67,11 +72,10 @@ static inline int smp_call_function_single1(int cpu, void (*func)(void *info),
 		func(info);
 		local_irq_enable();
 	} else {
-		spin_lock(&scfs_lock);
-		scfs_cpu = cpu;
-		scfs_func = func;
-		r = smp_call_function(scfs_thunk, info, nonatomic, wait);
-		spin_unlock(&scfs_lock);
+		thunk.cpu = cpu;
+		thunk.func = func;
+		thunk.info = info;
+		r = smp_call_function(scfs_thunk, &thunk, 0, 1);
 	}
 	put_cpu();
 	return r;


[-- Attachment #4: Type: text/plain, Size: 186 bytes --]

_______________________________________________
kvm-devel mailing list
kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org
https://lists.sourceforge.net/lists/listinfo/kvm-devel

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* Re: [ANNOUNCE] kvm-51 release
       [not found]                             ` <47373380.8040809-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2007-11-11 17:09                               ` Farkas Levente
       [not found]                                 ` <4737373C.3080009-lWVWdrzSO4GHXe+LvDLADg@public.gmane.org>
  2007-11-11 21:10                               ` david ahern
  1 sibling, 1 reply; 32+ messages in thread
From: Farkas Levente @ 2007-11-11 17:09 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-devel, david ahern

Avi Kivity wrote:
> david ahern wrote:
>> In RHEL 5.1 <linux/notifier.h> defines:
>>
>> #define CPU_TASKS_FROZEN    0x0010
>>
>> #define CPU_ONLINE_FROZEN   (CPU_ONLINE | CPU_TASKS_FROZEN)
>> #define CPU_DEAD_FROZEN     (CPU_DEAD | CPU_TASKS_FROZEN)
>>
>> which means in kvm-51/kernel/external-module-compat.h the '#ifndef
>> CPU_TASKS_FROZEN' needs to have a case. For my purposes, I just moved
>> up the endif around what was defined.
>>   
> 
> I committed a change which renders this unnecessary.  Will be part of
> kvm-52.
> 
>> With that change, kvm-51 compiles. I am still seeing 32-bit SMP guests
>> hang on boot for both 32-bit and 64-bit hosts (again running RHEL5.1).
>>   
> 
> I still don't.  Can you test the attached patch?

Can you tell us which CPU, how much memory, and which host and guest OS
you are using? Maybe we can guess at the differences.


PS: I can't test it until tomorrow, since I dare not restart it remotely;
most of the time the host crashes, and if I'm not there, all guests hang
until tomorrow.

-- 
  Levente                               "Si vis pacem para bellum!"


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [ANNOUNCE] kvm-51 release
       [not found]                                 ` <4737373C.3080009-lWVWdrzSO4GHXe+LvDLADg@public.gmane.org>
@ 2007-11-11 17:11                                   ` Avi Kivity
       [not found]                                     ` <473737D9.4020708-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
  0 siblings, 1 reply; 32+ messages in thread
From: Avi Kivity @ 2007-11-11 17:11 UTC (permalink / raw)
  To: Farkas Levente; +Cc: kvm-devel, david ahern

Farkas Levente wrote:
>>
>>> With that change, kvm-51 compiles. I am still seeing 32-bit SMP guests
>>> hang on boot for both 32-bit and 64-bit hosts (again running RHEL5.1).
>>>   
>>>       
>> I still don't.  Can you test the attached patch?
>>     
>
> can you tell us which cpu, memory, host and guest os are you using? may
> be we can guess the differences.
>
>
>   
processor       : 1
vendor_id       : GenuineIntel
cpu family      : 6
model           : 15
model name      : Intel(R) Xeon(R) CPU            5150  @ 2.66GHz
stepping        : 6
cpu MHz         : 1998.000
cache size      : 4096 KB
physical id     : 0
siblings        : 2
core id         : 1
cpu cores       : 2
fpu             : yes
fpu_exception   : yes
cpuid level     : 10
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge 
mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall 
nx lm constant_tsc pni monitor ds_cpl vmx est tm2 cx16 xtpr lahf_lm
bogomips        : 5319.99
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:


(total 2 cores)

             total       used       free     shared    buffers     cached
Mem:      16437164    7874104    8563060          0     112444    7488552
-/+ buffers/cache:     273108   16164056
Swap:      2031608          0    2031608


Host is clean centos-5 x86_64 w/ kvm.git

Guest is clean FC6 x86_64

-- 
error compiling committee.c: too many arguments to function



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [ANNOUNCE] kvm-51 release
       [not found]                                     ` <473737D9.4020708-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2007-11-11 17:20                                       ` Farkas Levente
       [not found]                                         ` <473739EA.9070804-lWVWdrzSO4GHXe+LvDLADg@public.gmane.org>
  0 siblings, 1 reply; 32+ messages in thread
From: Farkas Levente @ 2007-11-11 17:20 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-devel, david ahern

Avi Kivity wrote:
> Farkas Levente wrote:
>>>
>>>> With that change, kvm-51 compiles. I am still seeing 32-bit SMP guests
>>>> hang on boot for both 32-bit and 64-bit hosts (again running RHEL5.1).
>>>>         
>>> I still don't.  Can you test the attached patch?
>>>     
>>
>> can you tell us which cpu, memory, host and guest os are you using? may
>> be we can guess the differences.
> Host is clean centos-5 x86_64 w/ kvm.git

Same host OS, just different hardware :-(

> Guest is clean FC6 x86_64

Can you recheck it with one clean CentOS-5 x86_64 guest and one CentOS-5
i386 guest, both with 2GB RAM and (in your case) 2 vcpus, and try to
start the guests in parallel?

If it still works for you, then the only remaining difference is the
hardware, or that I use 4 vcpus (since our host has 4 CPUs).

-- 
  Levente                               "Si vis pacem para bellum!"


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [ANNOUNCE] kvm-51 release
       [not found]                             ` <47373380.8040809-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
  2007-11-11 17:09                               ` Farkas Levente
@ 2007-11-11 21:10                               ` david ahern
       [not found]                                 ` <47376FB3.30303-FYB4Gu1CFyUAvxtiuMwx3w@public.gmane.org>
  1 sibling, 1 reply; 32+ messages in thread
From: david ahern @ 2007-11-11 21:10 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-devel

The patch worked for me -- rhel4 smp guests boot fine on stock RHEL5 hosts, both 32-bit and 64-bit.

david


Avi Kivity wrote:
> david ahern wrote:
>> In RHEL 5.1 <linux/notifier.h> defines:
>>
>> #define CPU_TASKS_FROZEN    0x0010
>>
>> #define CPU_ONLINE_FROZEN   (CPU_ONLINE | CPU_TASKS_FROZEN)
>> #define CPU_DEAD_FROZEN     (CPU_DEAD | CPU_TASKS_FROZEN)
>>
>> which means in kvm-51/kernel/external-module-compat.h the '#ifndef
>> CPU_TASKS_FROZEN' needs to have a case. For my purposes, I just moved
>> up the endif around what was defined.
>>   
> 
> I committed a change which renders this unnecessary.  Will be part of
> kvm-52.
> 
>> With that change, kvm-51 compiles. I am still seeing 32-bit SMP guests
>> hang on boot for both 32-bit and 64-bit hosts (again running RHEL5.1).
>>   
> 
> I still don't.  Can you test the attached patch?
> 
> 
> 
> ------------------------------------------------------------------------
> 
> 
> 
> ------------------------------------------------------------------------
> 
> _______________________________________________
> kvm-devel mailing list
> kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org
> https://lists.sourceforge.net/lists/listinfo/kvm-devel


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [ANNOUNCE] kvm-51 release
       [not found]                                 ` <47376FB3.30303-FYB4Gu1CFyUAvxtiuMwx3w@public.gmane.org>
@ 2007-11-12  8:19                                   ` Avi Kivity
       [not found]                                     ` <47380C95.1030502-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
  0 siblings, 1 reply; 32+ messages in thread
From: Avi Kivity @ 2007-11-12  8:19 UTC (permalink / raw)
  To: david ahern; +Cc: kvm-devel

david ahern wrote:
> The patch worked for me -- rhel4 smp guests boot fine on stock RHEL5 hosts, both 32-bit and 64-bit.
>
>   

Excellent.  I had a premonition so it is already committed.

Do note that smp_call_function_mask() emulation is pretty bad in terms 
of performance on large multicores.  On a dual core it's basically 
equivalent to mainline; I guess it's okay for four-way, but above 
four-way you will need either mainline or a better 
smp_call_function_mask() (which is nontrivial but doable).

> david
>
>
> Avi Kivity wrote:
>   
>> david ahern wrote:
>>     
>>> In RHEL 5.1 <linux/notifier.h> defines:
>>>
>>> #define CPU_TASKS_FROZEN    0x0010
>>>
>>> #define CPU_ONLINE_FROZEN   (CPU_ONLINE | CPU_TASKS_FROZEN)
>>> #define CPU_DEAD_FROZEN     (CPU_DEAD | CPU_TASKS_FROZEN)
>>>
>>> which means in kvm-51/kernel/external-module-compat.h the '#ifndef
>>> CPU_TASKS_FROZEN' needs to cover that case. For my purposes, I just
>>> moved the endif up around what was already defined.
>>>   
>>>       
>> I committed a change which renders this unnecessary.  Will be part of
>> kvm-52.
>>
>>     
>>> With that change, kvm-51 compiles. I am still seeing 32-bit SMP guests
>>> hang on boot for both 32-bit and 64-bit hosts (again running RHEL5.1).
>>>   
>>>       
>> I still don't.  Can you test the attached patch?
>>
>>
>>
>> ------------------------------------------------------------------------
>>
>>
>>
>> ------------------------------------------------------------------------
>>
>>     


-- 
error compiling committee.c: too many arguments to function



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [ANNOUNCE] kvm-51 release
       [not found]                                         ` <473739EA.9070804-lWVWdrzSO4GHXe+LvDLADg@public.gmane.org>
@ 2007-11-12  8:22                                           ` Avi Kivity
  0 siblings, 0 replies; 32+ messages in thread
From: Avi Kivity @ 2007-11-12  8:22 UTC (permalink / raw)
  To: Farkas Levente; +Cc: kvm-devel, david ahern

Farkas Levente wrote:
>
>> Guest is clean FC6 x86_64
>>     
>
> can you recheck it with 1 clean centos-5 x86_64 and one centos-5 i386
> both have 2GB ram and in your case 2 vcpu and try to start the guest
> parallel?
>
> if it still works for you, then the only difference will be the
> hardware, or that I try 4 vcpus (since our host has 4 CPUs).
>
>   

Since David Ahern reported success with my patch, I'll wait for you to 
try it or kvm-52.

-- 
error compiling committee.c: too many arguments to function



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [ANNOUNCE] kvm-51 release
       [not found]                                     ` <47380C95.1030502-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2007-11-12 21:46                                       ` david ahern
       [not found]                                         ` <4738C9B5.6060609-FYB4Gu1CFyUAvxtiuMwx3w@public.gmane.org>
  0 siblings, 1 reply; 32+ messages in thread
From: david ahern @ 2007-11-12 21:46 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-devel

With kvm-52 my 32-bit host running RHEL5.1 can start an RHEL 5 SMP guest only once. Second and subsequent attempts hang. Removing the kvm and kvm_intel modules has no effect; I need to reboot the host to get an SMP guest to start. My similarly configured 64-bit host does not seem to have this problem.


Second attempts to start the RHEL5 SMP guest hang at:
Starting udev: _


Looking at top on the host shows qemu in a loop:
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                                
 3909 root      18   0 1625m  67m 9476 R  400  2.1   2:52.32 qemu-system-x86                                         


In this case the qemu threads are:

  PID   LWP TTY          TIME CMD
 3909  3909 pts/0    00:01:12 qemu-system-x86
 3909  3911 pts/0    00:01:05 qemu-system-x86
 3909  3912 pts/0    00:01:05 qemu-system-x86
 3909  3913 pts/0    00:01:07 qemu-system-x86
 3909  3917 pts/0    00:00:00 qemu-system-x86



and their kernel side backtraces are:

 process trace for qemu-system-x86(3909)
        f5967d88 00000082 f8c125e4 bbdec465 000001c6 f5230da4 00000001 f7acf000 
        f7d7d000 bbded629 000001c6 000011c4 00000000 f7acf110 c30126e0 00000001 
        f4d8a000 f5967d90 f5967d80 f5230da0 f5967000 f8c11120 f5230da0 f4d8a000 
 Call Trace:
  [<f8c125e4>] vmx_vcpu_put+0xef/0xf6 [kvm_intel]
  [<f8c11120>] handle_external_interrupt+0x0/0xc [kvm_intel]
  [<c042169f>] __cond_resched+0x16/0x34
  [<c0604218>] cond_resched+0x2a/0x31
  [<f8b96d7f>] kvm_arch_vcpu_ioctl_run+0x28d/0x333 [kvm]
  [<f8b94319>] kvm_vcpu_ioctl+0x0/0x366 [kvm]
  [<f8b943d4>] kvm_vcpu_ioctl+0xbb/0x366 [kvm]
  [<c042169f>] __cond_resched+0x16/0x34
  [<c0604218>] cond_resched+0x2a/0x31
  [<c0480305>] core_sys_select+0x1ef/0x2ca
  [<c041ea84>] __wake_up_common+0x2f/0x53
  [<c0604141>] schedule+0x90d/0x9ba
  [<c0405953>] reschedule_interrupt+0x1f/0x24
  [<c042e759>] __dequeue_signal+0x151/0x15c
  [<c042fa99>] dequeue_signal+0x2d/0x9c
  [<c043062c>] sys_rt_sigtimedwait+0xc5/0x2c2
  [<c042cc0e>] getnstimeofday+0x30/0xb6
  [<c04386d6>] ktime_get_ts+0x16/0x44
  [<c04388b6>] ktime_get+0x12/0x34
  [<c04352a6>] common_timer_get+0xee/0x129
  [<f8b94319>] kvm_vcpu_ioctl+0x0/0x366 [kvm]
  [<c047f1e8>] do_ioctl+0x1c/0x5d
  [<c047f473>] vfs_ioctl+0x24a/0x25c
  [<c047f4cd>] sys_ioctl+0x48/0x5f
  [<c0404eff>] syscall_call+0x7/0xb
  =======================
 process trace for qemu-system-x86(3911)
        c301a6e0 00000100 000001c7 f749baa0 00000001 c301a6e0 f749baa0 00000001 
        f51fed44 f51fed44 f51fed6c 00000001 00000001 00000046 f579ce20 f57ee000 
        00000001 c04059bf f579ce20 8005003b 00006c00 f8c113e5 f579ce20 f579ce20 
 Call Trace:
  [<c04059bf>] apic_timer_interrupt+0x1f/0x24
  [<f8c113e5>] vmcs_writel+0x1b/0x2c [kvm_intel]
  [<f8b96cf4>] kvm_arch_vcpu_ioctl_run+0x202/0x333 [kvm]
  [<f8b94319>] kvm_vcpu_ioctl+0x0/0x366 [kvm]
  [<f8b943d4>] kvm_vcpu_ioctl+0xbb/0x366 [kvm]
  [<c041fa31>] enqueue_task+0x29/0x39
  [<c041fa5d>] __activate_task+0x1c/0x29
  [<c04202a7>] try_to_wake_up+0x371/0x37b
  [<c0604141>] schedule+0x90d/0x9ba
  [<c041ea84>] __wake_up_common+0x2f/0x53
  [<c041f871>] __wake_up+0x2a/0x3d
  [<c0438eb7>] wake_futex+0x3a/0x44
  [<c0439187>] futex_wake+0xa9/0xb3
  [<c0439d66>] do_futex+0x20d/0xb15
  [<f8b94696>] kvm_ack_smp_call+0x17/0x27 [kvm]
  [<c042e759>] __dequeue_signal+0x151/0x15c
  [<c042fa99>] dequeue_signal+0x2d/0x9c
  [<f8b93ea9>] kvm_vm_ioctl+0x0/0x277 [kvm]
  [<f8b9410d>] kvm_vm_ioctl+0x264/0x277 [kvm]
  [<c04202b1>] default_wake_function+0x0/0xc
  [<c040599b>] call_function_interrupt+0x1f/0x24
  [<f8b94319>] kvm_vcpu_ioctl+0x0/0x366 [kvm]
  [<c047f1e8>] do_ioctl+0x1c/0x5d
  [<c047f473>] vfs_ioctl+0x24a/0x25c
  [<c047f4cd>] sys_ioctl+0x48/0x5f
  [<c0404eff>] syscall_call+0x7/0xb
  =======================
 process trace for qemu-system-x86(3912)
        f560fd88 00000082 f8c125e4 193272c5 000001c8 f52b6074 00000004 f7f09000 
        f7f09000 19328fc9 000001c8 00001d04 00000002 f52b6070 55eefb90 00000000 
        f52b6070 f5693000 f52b6070 8005003b 00006c00 f8c113e5 f52b6070 f52b6070 
 Call Trace:
  [<f8c125e4>] vmx_vcpu_put+0xef/0xf6 [kvm_intel]
  [<f8c113e5>] vmcs_writel+0x1b/0x2c [kvm_intel]
  [<f8b96cf4>] kvm_arch_vcpu_ioctl_run+0x202/0x333 [kvm]
  [<f8b94319>] kvm_vcpu_ioctl+0x0/0x366 [kvm]
  [<f8b943d4>] kvm_vcpu_ioctl+0xbb/0x366 [kvm]
  [<c0604141>] schedule+0x90d/0x9ba
  [<c041ea84>] __wake_up_common+0x2f/0x53
  [<c0461e10>] find_extend_vma+0x12/0x49
  [<c0438d53>] get_futex_key+0x40/0xd0
  [<c0439187>] futex_wake+0xa9/0xb3
  [<c0439d66>] do_futex+0x20d/0xb15
  [<f888f9b0>] ext3_ordered_writepage+0x0/0x162 [ext3]
  [<c042e759>] __dequeue_signal+0x151/0x15c
  [<c042fa99>] dequeue_signal+0x2d/0x9c
  [<f8b93ea9>] kvm_vm_ioctl+0x0/0x277 [kvm]
  [<f8b9410d>] kvm_vm_ioctl+0x264/0x277 [kvm]
  [<c04202b1>] default_wake_function+0x0/0xc
  [<f8b94319>] kvm_vcpu_ioctl+0x0/0x366 [kvm]
  [<c047f1e8>] do_ioctl+0x1c/0x5d
  [<c047f473>] vfs_ioctl+0x24a/0x25c
  [<c047f4cd>] sys_ioctl+0x48/0x5f
  [<c0404eff>] syscall_call+0x7/0xb
  =======================
 process trace for qemu-system-x86(3913)
        c302a6e0 00000100 000001c8 f7488550 00000003 c302a6e0 f7488550 00000003 
        f4d92d44 f4d92d44 f4d92d6c 00000001 00000001 00000046 f52b6de0 f5aaa000 
        00000001 c04059bf f52b6de0 8005003b 00006c00 f8c113e5 f52b6de0 f52b6de0 
 Call Trace:
  [<c04059bf>] apic_timer_interrupt+0x1f/0x24
  [<f8c113e5>] vmcs_writel+0x1b/0x2c [kvm_intel]
  [<f8b96cf4>] kvm_arch_vcpu_ioctl_run+0x202/0x333 [kvm]
  [<f8b94319>] kvm_vcpu_ioctl+0x0/0x366 [kvm]
  [<f8b943d4>] kvm_vcpu_ioctl+0xbb/0x366 [kvm]
  [<c0604141>] schedule+0x90d/0x9ba
  [<c041ea84>] __wake_up_common+0x2f/0x53
  [<c0461e10>] find_extend_vma+0x12/0x49
  [<c0438d53>] get_futex_key+0x40/0xd0
  [<c0439187>] futex_wake+0xa9/0xb3
  [<c0439d66>] do_futex+0x20d/0xb15
  [<c040599b>] call_function_interrupt+0x1f/0x24
  [<c042e759>] __dequeue_signal+0x151/0x15c
  [<f8b93ea9>] kvm_vm_ioctl+0x0/0x277 [kvm]
  [<f8b9410d>] kvm_vm_ioctl+0x264/0x277 [kvm]
  [<c04202b1>] default_wake_function+0x0/0xc
  [<f8b94319>] kvm_vcpu_ioctl+0x0/0x366 [kvm]
  [<c047f1e8>] do_ioctl+0x1c/0x5d
  [<c047f473>] vfs_ioctl+0x24a/0x25c
  [<c047f4cd>] sys_ioctl+0x48/0x5f
  [<c0404eff>] syscall_call+0x7/0xb
  =======================

david


Avi Kivity wrote:
> david ahern wrote:
>> The patch worked for me -- rhel4 smp guests boot fine on stock RHEL5
>> hosts, both 32-bit and 64-bit.
>>
>>   
> 
> Excellent.  I had a premonition so it is already committed.
> 
> Do note that smp_call_function_mask() emulation is pretty bad in terms
> of performance on large multicores.  On a dual core it's basically
> equivalent to mainline; I guess it's okay for four-way, but above
> four-way you will need either mainline or a better
> smp_call_function_mask() (which is nontrivial but doable).
> 
>> david
>>
>>
>> Avi Kivity wrote:
>>  
>>> david ahern wrote:
>>>    
>>>> In RHEL 5.1 <linux/notifier.h> defines:
>>>>
>>>> #define CPU_TASKS_FROZEN    0x0010
>>>>
>>>> #define CPU_ONLINE_FROZEN   (CPU_ONLINE | CPU_TASKS_FROZEN)
>>>> #define CPU_DEAD_FROZEN     (CPU_DEAD | CPU_TASKS_FROZEN)
>>>>
>>>> which means in kvm-51/kernel/external-module-compat.h the '#ifndef
>>>> CPU_TASKS_FROZEN' needs to cover that case. For my purposes, I just
>>>> moved the endif up around what was already defined.
>>>>         
>>> I committed a change which renders this unnecessary.  Will be part of
>>> kvm-52.
>>>
>>>    
>>>> With that change, kvm-51 compiles. I am still seeing 32-bit SMP guests
>>>> hang on boot for both 32-bit and 64-bit hosts (again running RHEL5.1).
>>>>         
>>> I still don't.  Can you test the attached patch?
>>>
>>>
>>>
>>> ------------------------------------------------------------------------
>>>
>>>
>>>
>>> ------------------------------------------------------------------------
>>>
>>>     
> 
> 


^ permalink raw reply	[flat|nested] 32+ messages in thread

* RHEL5 smp guests on RHEL5.1 hosts hang with kvm-52
       [not found]                                         ` <4738C9B5.6060609-FYB4Gu1CFyUAvxtiuMwx3w@public.gmane.org>
@ 2007-11-12 22:37                                           ` david ahern
       [not found]                                             ` <4738D58C.70304-FYB4Gu1CFyUAvxtiuMwx3w@public.gmane.org>
  2007-11-13  8:29                                           ` [ANNOUNCE] kvm-51 release Avi Kivity
  1 sibling, 1 reply; 32+ messages in thread
From: david ahern @ 2007-11-12 22:37 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-devel

[-- Attachment #1: Type: text/plain, Size: 9147 bytes --]

(Changed the subject to correspond with email.)

I am having the same problem on the 64-bit host running RHEL5.1 as well; it just takes more reboots. Same symptoms as I mentioned for the 32-bit host. Kernel-side stack traces for each qemu thread for one of the lockups are attached; the file contains traces for each thread at 3 sample times, in case it helps provide some insight.

david


david ahern wrote:
> With kvm-52 my 32-bit host running RHEL5.1 can start an RHEL 5 SMP guest only once. Second and subsequent attempts hang. Removing the kvm and kvm_intel modules has no effect; I need to reboot the host to get an SMP guest to start. My similarly configured 64-bit host does not seem to have this problem.
> 
> 
> Second attempts to start the RHEL5 SMP guest hang at:
> Starting udev: _
> 
> 
> Looking at top on the host shows qemu in a loop:
>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                                
>  3909 root      18   0 1625m  67m 9476 R  400  2.1   2:52.32 qemu-system-x86                                         
> 
> 
> In this case the qemu threads are:
> 
>   PID   LWP TTY          TIME CMD
>  3909  3909 pts/0    00:01:12 qemu-system-x86
>  3909  3911 pts/0    00:01:05 qemu-system-x86
>  3909  3912 pts/0    00:01:05 qemu-system-x86
>  3909  3913 pts/0    00:01:07 qemu-system-x86
>  3909  3917 pts/0    00:00:00 qemu-system-x86
> 
> 
> 
> and their kernel side backtraces are:
> 
>  process trace for qemu-system-x86(3909)
>         f5967d88 00000082 f8c125e4 bbdec465 000001c6 f5230da4 00000001 f7acf000 
>         f7d7d000 bbded629 000001c6 000011c4 00000000 f7acf110 c30126e0 00000001 
>         f4d8a000 f5967d90 f5967d80 f5230da0 f5967000 f8c11120 f5230da0 f4d8a000 
>  Call Trace:
>   [<f8c125e4>] vmx_vcpu_put+0xef/0xf6 [kvm_intel]
>   [<f8c11120>] handle_external_interrupt+0x0/0xc [kvm_intel]
>   [<c042169f>] __cond_resched+0x16/0x34
>   [<c0604218>] cond_resched+0x2a/0x31
>   [<f8b96d7f>] kvm_arch_vcpu_ioctl_run+0x28d/0x333 [kvm]
>   [<f8b94319>] kvm_vcpu_ioctl+0x0/0x366 [kvm]
>   [<f8b943d4>] kvm_vcpu_ioctl+0xbb/0x366 [kvm]
>   [<c042169f>] __cond_resched+0x16/0x34
>   [<c0604218>] cond_resched+0x2a/0x31
>   [<c0480305>] core_sys_select+0x1ef/0x2ca
>   [<c041ea84>] __wake_up_common+0x2f/0x53
>   [<c0604141>] schedule+0x90d/0x9ba
>   [<c0405953>] reschedule_interrupt+0x1f/0x24
>   [<c042e759>] __dequeue_signal+0x151/0x15c
>   [<c042fa99>] dequeue_signal+0x2d/0x9c
>   [<c043062c>] sys_rt_sigtimedwait+0xc5/0x2c2
>   [<c042cc0e>] getnstimeofday+0x30/0xb6
>   [<c04386d6>] ktime_get_ts+0x16/0x44
>   [<c04388b6>] ktime_get+0x12/0x34
>   [<c04352a6>] common_timer_get+0xee/0x129
>   [<f8b94319>] kvm_vcpu_ioctl+0x0/0x366 [kvm]
>   [<c047f1e8>] do_ioctl+0x1c/0x5d
>   [<c047f473>] vfs_ioctl+0x24a/0x25c
>   [<c047f4cd>] sys_ioctl+0x48/0x5f
>   [<c0404eff>] syscall_call+0x7/0xb
>   =======================
>  process trace for qemu-system-x86(3911)
>         c301a6e0 00000100 000001c7 f749baa0 00000001 c301a6e0 f749baa0 00000001 
>         f51fed44 f51fed44 f51fed6c 00000001 00000001 00000046 f579ce20 f57ee000 
>         00000001 c04059bf f579ce20 8005003b 00006c00 f8c113e5 f579ce20 f579ce20 
>  Call Trace:
>   [<c04059bf>] apic_timer_interrupt+0x1f/0x24
>   [<f8c113e5>] vmcs_writel+0x1b/0x2c [kvm_intel]
>   [<f8b96cf4>] kvm_arch_vcpu_ioctl_run+0x202/0x333 [kvm]
>   [<f8b94319>] kvm_vcpu_ioctl+0x0/0x366 [kvm]
>   [<f8b943d4>] kvm_vcpu_ioctl+0xbb/0x366 [kvm]
>   [<c041fa31>] enqueue_task+0x29/0x39
>   [<c041fa5d>] __activate_task+0x1c/0x29
>   [<c04202a7>] try_to_wake_up+0x371/0x37b
>   [<c0604141>] schedule+0x90d/0x9ba
>   [<c041ea84>] __wake_up_common+0x2f/0x53
>   [<c041f871>] __wake_up+0x2a/0x3d
>   [<c0438eb7>] wake_futex+0x3a/0x44
>   [<c0439187>] futex_wake+0xa9/0xb3
>   [<c0439d66>] do_futex+0x20d/0xb15
>   [<f8b94696>] kvm_ack_smp_call+0x17/0x27 [kvm]
>   [<c042e759>] __dequeue_signal+0x151/0x15c
>   [<c042fa99>] dequeue_signal+0x2d/0x9c
>   [<f8b93ea9>] kvm_vm_ioctl+0x0/0x277 [kvm]
>   [<f8b9410d>] kvm_vm_ioctl+0x264/0x277 [kvm]
>   [<c04202b1>] default_wake_function+0x0/0xc
>   [<c040599b>] call_function_interrupt+0x1f/0x24
>   [<f8b94319>] kvm_vcpu_ioctl+0x0/0x366 [kvm]
>   [<c047f1e8>] do_ioctl+0x1c/0x5d
>   [<c047f473>] vfs_ioctl+0x24a/0x25c
>   [<c047f4cd>] sys_ioctl+0x48/0x5f
>   [<c0404eff>] syscall_call+0x7/0xb
>   =======================
>  process trace for qemu-system-x86(3912)
>         f560fd88 00000082 f8c125e4 193272c5 000001c8 f52b6074 00000004 f7f09000 
>         f7f09000 19328fc9 000001c8 00001d04 00000002 f52b6070 55eefb90 00000000 
>         f52b6070 f5693000 f52b6070 8005003b 00006c00 f8c113e5 f52b6070 f52b6070 
>  Call Trace:
>   [<f8c125e4>] vmx_vcpu_put+0xef/0xf6 [kvm_intel]
>   [<f8c113e5>] vmcs_writel+0x1b/0x2c [kvm_intel]
>   [<f8b96cf4>] kvm_arch_vcpu_ioctl_run+0x202/0x333 [kvm]
>   [<f8b94319>] kvm_vcpu_ioctl+0x0/0x366 [kvm]
>   [<f8b943d4>] kvm_vcpu_ioctl+0xbb/0x366 [kvm]
>   [<c0604141>] schedule+0x90d/0x9ba
>   [<c041ea84>] __wake_up_common+0x2f/0x53
>   [<c0461e10>] find_extend_vma+0x12/0x49
>   [<c0438d53>] get_futex_key+0x40/0xd0
>   [<c0439187>] futex_wake+0xa9/0xb3
>   [<c0439d66>] do_futex+0x20d/0xb15
>   [<f888f9b0>] ext3_ordered_writepage+0x0/0x162 [ext3]
>   [<c042e759>] __dequeue_signal+0x151/0x15c
>   [<c042fa99>] dequeue_signal+0x2d/0x9c
>   [<f8b93ea9>] kvm_vm_ioctl+0x0/0x277 [kvm]
>   [<f8b9410d>] kvm_vm_ioctl+0x264/0x277 [kvm]
>   [<c04202b1>] default_wake_function+0x0/0xc
>   [<f8b94319>] kvm_vcpu_ioctl+0x0/0x366 [kvm]
>   [<c047f1e8>] do_ioctl+0x1c/0x5d
>   [<c047f473>] vfs_ioctl+0x24a/0x25c
>   [<c047f4cd>] sys_ioctl+0x48/0x5f
>   [<c0404eff>] syscall_call+0x7/0xb
>   =======================
>  process trace for qemu-system-x86(3913)
>         c302a6e0 00000100 000001c8 f7488550 00000003 c302a6e0 f7488550 00000003 
>         f4d92d44 f4d92d44 f4d92d6c 00000001 00000001 00000046 f52b6de0 f5aaa000 
>         00000001 c04059bf f52b6de0 8005003b 00006c00 f8c113e5 f52b6de0 f52b6de0 
>  Call Trace:
>   [<c04059bf>] apic_timer_interrupt+0x1f/0x24
>   [<f8c113e5>] vmcs_writel+0x1b/0x2c [kvm_intel]
>   [<f8b96cf4>] kvm_arch_vcpu_ioctl_run+0x202/0x333 [kvm]
>   [<f8b94319>] kvm_vcpu_ioctl+0x0/0x366 [kvm]
>   [<f8b943d4>] kvm_vcpu_ioctl+0xbb/0x366 [kvm]
>   [<c0604141>] schedule+0x90d/0x9ba
>   [<c041ea84>] __wake_up_common+0x2f/0x53
>   [<c0461e10>] find_extend_vma+0x12/0x49
>   [<c0438d53>] get_futex_key+0x40/0xd0
>   [<c0439187>] futex_wake+0xa9/0xb3
>   [<c0439d66>] do_futex+0x20d/0xb15
>   [<c040599b>] call_function_interrupt+0x1f/0x24
>   [<c042e759>] __dequeue_signal+0x151/0x15c
>   [<f8b93ea9>] kvm_vm_ioctl+0x0/0x277 [kvm]
>   [<f8b9410d>] kvm_vm_ioctl+0x264/0x277 [kvm]
>   [<c04202b1>] default_wake_function+0x0/0xc
>   [<f8b94319>] kvm_vcpu_ioctl+0x0/0x366 [kvm]
>   [<c047f1e8>] do_ioctl+0x1c/0x5d
>   [<c047f473>] vfs_ioctl+0x24a/0x25c
>   [<c047f4cd>] sys_ioctl+0x48/0x5f
>   [<c0404eff>] syscall_call+0x7/0xb
>   =======================
> 
> david
> 
> 
> Avi Kivity wrote:
>> david ahern wrote:
>>> The patch worked for me -- rhel4 smp guests boot fine on stock RHEL5
>>> hosts, both 32-bit and 64-bit.
>>>
>>>   
>> Excellent.  I had a premonition so it is already committed.
>>
>> Do note that smp_call_function_mask() emulation is pretty bad in terms
>> of performance on large multicores.  On a dual core it's basically
>> equivalent to mainline; I guess it's okay for four-way, but above
>> four-way you will need either mainline or a better
>> smp_call_function_mask() (which is nontrivial but doable).
>>
>>> david
>>>
>>>
>>> Avi Kivity wrote:
>>>  
>>>> david ahern wrote:
>>>>    
>>>>> In RHEL 5.1 <linux/notifier.h> defines:
>>>>>
>>>>> #define CPU_TASKS_FROZEN    0x0010
>>>>>
>>>>> #define CPU_ONLINE_FROZEN   (CPU_ONLINE | CPU_TASKS_FROZEN)
>>>>> #define CPU_DEAD_FROZEN     (CPU_DEAD | CPU_TASKS_FROZEN)
>>>>>
>>>>> which means in kvm-51/kernel/external-module-compat.h the '#ifndef
>>>>> CPU_TASKS_FROZEN' needs to cover that case. For my purposes, I just
>>>>> moved the endif up around what was already defined.
>>>>>         
>>>> I committed a change which renders this unnecessary.  Will be part of
>>>> kvm-52.
>>>>
>>>>    
>>>>> With that change, kvm-51 compiles. I am still seeing 32-bit SMP guests
>>>>> hang on boot for both 32-bit and 64-bit hosts (again running RHEL5.1).
>>>>>         
>>>> I still don't.  Can you test the attached patch?
>>>>
>>>>
>>>>
>>>> ------------------------------------------------------------------------
>>>>
>>>>
>>>>
>>>> ------------------------------------------------------------------------
>>>>
>>>>     
>>
> 

[-- Attachment #2: messages --]
[-- Type: text/plain, Size: 27242 bytes --]

Nov 12 15:30:40 bldr-ccm20 syslogd 1.4.1: restart.
Nov 12 15:31:06 bldr-ccm20 kernel: process trace for qemu-system-x86(3853)
Nov 12 15:31:06 bldr-ccm20 kernel:  0000000000000000 0000000000000086 ffff81011c30b1e8 00000000ffffd0b0
Nov 12 15:31:06 bldr-ccm20 kernel:  0000000000000006 ffff81012db4e100 ffff81012f9f8080 0000015310c2502a
Nov 12 15:31:06 bldr-ccm20 kernel:  0000000000001a33 ffff81012db4e2f0 ffff810100000001 ffff81012f9f8080
Nov 12 15:31:06 bldr-ccm20 kernel: Call Trace:
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8848475b>] :kvm_intel:vmcs_writel+0x20/0x35
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff80089a00>] __cond_resched+0x1c/0x44
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8006104f>] cond_resched+0x3b/0x42
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8846463b>] :kvm:kvm_arch_vcpu_ioctl_run+0x23f/0x393
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8846150e>] :kvm:kvm_vcpu_ioctl+0xc3/0x388
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff80021d2f>] __up_read+0x19/0x7f
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff800dd3ac>] core_sys_select+0x234/0x265
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8006b75e>] do_gettimeofday+0x50/0x92
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff80093c20>] __dequeue_signal+0x18b/0x19a
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8006b75e>] do_gettimeofday+0x50/0x92
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff80058baf>] getnstimeofday+0x10/0x28
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8009d3fc>] ktime_get_ts+0x1a/0x4e
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8003fc22>] do_ioctl+0x21/0x6b
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8006b098>] syscall_trace_leave+0x2c/0x87
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8004a242>] sys_ioctl+0x59/0x78
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8005b28d>] tracesys+0xd5/0xe0
Nov 12 15:31:06 bldr-ccm20 kernel: 
Nov 12 15:31:06 bldr-ccm20 kernel: process trace for qemu-system-x86(3855)
Nov 12 15:31:06 bldr-ccm20 kernel:  ffff81010b72dca8 0000000000000086 ffff81011c30d228 00000000ffffd0b0
Nov 12 15:31:06 bldr-ccm20 kernel:  0000000000000001 ffff81012dbd5040 ffff81012c799860 0000015310ba11ff
Nov 12 15:31:06 bldr-ccm20 kernel:  0000000000001502 ffff81012dbd5230 ffff810100000003 ffff81012c799860
Nov 12 15:31:06 bldr-ccm20 kernel: Call Trace:
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8848475b>] :kvm_intel:vmcs_writel+0x20/0x35
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff80089a00>] __cond_resched+0x1c/0x44
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8006104f>] cond_resched+0x3b/0x42
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff884646e2>] :kvm:kvm_arch_vcpu_ioctl_run+0x2e6/0x393
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8846150e>] :kvm:kvm_vcpu_ioctl+0xc3/0x388
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff80021d2f>] __up_read+0x19/0x7f
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8009dcf1>] futex_wake+0xc6/0xd5
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8003d808>] do_futex+0x241/0xbc7
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff80093c20>] __dequeue_signal+0x18b/0x19a
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff88461432>] :kvm:kvm_vm_ioctl+0x277/0x290
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff80094cd3>] dequeue_signal+0x3c/0xbd
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff80060f29>] thread_return+0x0/0xeb
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8009527c>] sys_rt_sigtimedwait+0xcb/0x2b4
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8003fc22>] do_ioctl+0x21/0x6b
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8002fc67>] vfs_ioctl+0x248/0x261
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8004a242>] sys_ioctl+0x59/0x78
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8005b28d>] tracesys+0xd5/0xe0
Nov 12 15:31:06 bldr-ccm20 kernel: 
Nov 12 15:31:06 bldr-ccm20 kernel: process trace for qemu-system-x86(3856)
Nov 12 15:31:06 bldr-ccm20 kernel:  ffff81010b6edca8 0000000000000086 0000000000000000 0000000000000001
Nov 12 15:31:06 bldr-ccm20 kernel:  0000000000000001 ffff81012db040c0 ffff81012fa0a0c0 000001531107de18
Nov 12 15:31:06 bldr-ccm20 kernel:  000000000000ae62 ffff81012db042b0 0000000000000000 ffff81012fa0a0c0
Nov 12 15:31:06 bldr-ccm20 kernel: Call Trace:
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff884843f3>] :kvm_intel:handle_external_interrupt+0x0/0xc
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff884843f3>] :kvm_intel:handle_external_interrupt+0x0/0xc
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8848475b>] :kvm_intel:vmcs_writel+0x20/0x35
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8846463b>] :kvm:kvm_arch_vcpu_ioctl_run+0x23f/0x393
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8846150e>] :kvm:kvm_vcpu_ioctl+0xc3/0x388
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff80021d2f>] __up_read+0x19/0x7f
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8009dcf1>] futex_wake+0xc6/0xd5
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8003d808>] do_futex+0x241/0xbc7
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff80093c20>] __dequeue_signal+0x18b/0x19a
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff80094cd3>] dequeue_signal+0x3c/0xbd
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8009527c>] sys_rt_sigtimedwait+0xcb/0x2b4
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8003fc22>] do_ioctl+0x21/0x6b
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8002fc67>] vfs_ioctl+0x248/0x261
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8004a242>] sys_ioctl+0x59/0x78
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8005b28d>] tracesys+0xd5/0xe0
Nov 12 15:31:06 bldr-ccm20 kernel: 
Nov 12 15:31:06 bldr-ccm20 kernel: process trace for qemu-system-x86(3857)
Nov 12 15:31:06 bldr-ccm20 kernel:  ffff810125e8f480 ffffffff8846dd0a ffff81010b767328 00000000ffffd0b0
Nov 12 15:31:06 bldr-ccm20 kernel:  ffffffff88479c60 ffffffff88466d94 0000000100000001 ffff81010b766240
Nov 12 15:31:06 bldr-ccm20 kernel:  00000000fee000b0 0000000000000004 ffff81010b767328 00000000ffffd0b0
Nov 12 15:31:06 bldr-ccm20 kernel: Call Trace:
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8846dd0a>] :kvm:apic_mmio_write+0x198/0x48b
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff88466d94>] :kvm:paging32_gva_to_gpa+0x21/0x47
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff884629fb>] :kvm:emulator_write_emulated_onepage+0xa6/0xe5
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8846b2da>] :kvm:x86_emulate_insn+0x2e21/0x40db
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8848475b>] :kvm_intel:vmcs_writel+0x20/0x35
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8848475b>] :kvm_intel:vmcs_writel+0x20/0x35
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8846463b>] :kvm:kvm_arch_vcpu_ioctl_run+0x23f/0x393
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8846150e>] :kvm:kvm_vcpu_ioctl+0xc3/0x388
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff80021d2f>] __up_read+0x19/0x7f
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8009dcf1>] futex_wake+0xc6/0xd5
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8003d808>] do_futex+0x241/0xbc7
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff80093c20>] __dequeue_signal+0x18b/0x19a
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff88461432>] :kvm:kvm_vm_ioctl+0x277/0x290
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff80094cd3>] dequeue_signal+0x3c/0xbd
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff80095415>] sys_rt_sigtimedwait+0x264/0x2b4
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8009527c>] sys_rt_sigtimedwait+0xcb/0x2b4
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8003fc22>] do_ioctl+0x21/0x6b
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8002fc67>] vfs_ioctl+0x248/0x261
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8004a242>] sys_ioctl+0x59/0x78
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8005b28d>] tracesys+0xd5/0xe0
Nov 12 15:31:06 bldr-ccm20 kernel: 
Nov 12 15:31:06 bldr-ccm20 kernel: process trace for qemu-system-x86(3859)
Nov 12 15:31:06 bldr-ccm20 kernel:  ffff81012269fd48 0000000000000086 ffff810103efe6a0 ffffffff8000bf9a
Nov 12 15:31:06 bldr-ccm20 kernel:  000000000000000a ffff81012c2160c0 ffff81012db4e100 00000133b7237c71
Nov 12 15:31:06 bldr-ccm20 kernel:  00000000000020c7 ffff81012c2162a8 0000000000000002 0000000000039b64
Nov 12 15:31:06 bldr-ccm20 kernel: Call Trace:
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8000bf9a>] do_generic_mapping_read+0x3b6/0x3f8
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8003d03b>] lock_timer_base+0x1b/0x3c
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8001c40f>] __mod_timer+0xb0/0xbe
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff80061839>] schedule_timeout+0x8a/0xad
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff80092ada>] process_timeout+0x0/0x5
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8003d7a1>] do_futex+0x1da/0xbc7
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff800884ac>] default_wake_function+0x0/0xe
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff80058a29>] group_send_sig_info+0x62/0x6f
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff800947af>] kill_proc_info+0x48/0x60
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8009e935>] sys_futex+0x101/0x123
Nov 12 15:31:06 bldr-ccm20 kernel:  [<ffffffff8005b28d>] tracesys+0xd5/0xe0
Nov 12 15:31:06 bldr-ccm20 kernel: 
Nov 12 15:32:31 bldr-ccm20 kernel: process trace for qemu-system-x86(3853)
Nov 12 15:32:31 bldr-ccm20 kernel:  ffff81011e469978 0000000000000086 0000000000000000 0000000000000000
Nov 12 15:32:31 bldr-ccm20 kernel:  0000000000000007 ffff81012db4e100 ffff810104375100 00000166b812b5f2
Nov 12 15:32:31 bldr-ccm20 kernel:  00000000001e5bf4 ffff81012db4e2e8 0000000000000001 ffff810104375100
Nov 12 15:32:31 bldr-ccm20 kernel: Call Trace:
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8846e2cb>] :kvm:ioapic_deliver+0x151/0x1f1
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8002dfa2>] __wake_up+0x38/0x4f
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff800342a2>] follow_page+0x217/0x2b8
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff800c4dad>] get_user_pages+0x32f/0x3a9
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff88460362>] :kvm:__gfn_to_page+0x76/0xa9
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff80021d2f>] __up_read+0x19/0x7f
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8846087d>] :kvm:gfn_to_page+0x4d/0x56
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff88466d1b>] :kvm:paging32_walk_addr+0x2a3/0x2fb
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8846d1ca>] :kvm:apic_update_ppr+0x1e/0x49
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8846dd0a>] :kvm:apic_mmio_write+0x198/0x48b
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff88466d94>] :kvm:paging32_gva_to_gpa+0x21/0x47
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff80060f29>] thread_return+0x0/0xeb
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8848475b>] :kvm_intel:vmcs_writel+0x20/0x35
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8848475b>] :kvm_intel:vmcs_writel+0x20/0x35
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8846463b>] :kvm:kvm_arch_vcpu_ioctl_run+0x23f/0x393
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8846150e>] :kvm:kvm_vcpu_ioctl+0xc3/0x388
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff80021d2f>] __up_read+0x19/0x7f
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff800dd3ac>] core_sys_select+0x234/0x265
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8006b75e>] do_gettimeofday+0x50/0x92
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff80058baf>] getnstimeofday+0x10/0x28
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8009d243>] enqueue_hrtimer+0x55/0x70
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8003bb73>] hrtimer_start+0xbc/0xce
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8003fc22>] do_ioctl+0x21/0x6b
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8002fc67>] vfs_ioctl+0x248/0x261
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8004a242>] sys_ioctl+0x59/0x78
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8005b28d>] tracesys+0xd5/0xe0
Nov 12 15:32:31 bldr-ccm20 kernel: 
Nov 12 15:32:31 bldr-ccm20 kernel: process trace for qemu-system-x86(3855)
Nov 12 15:32:31 bldr-ccm20 kernel:  ffff81010b72dc38 0000000000000086 0000000000001000 ffffffff80060f29
Nov 12 15:32:31 bldr-ccm20 kernel:  0000000000000001 ffff81012dbd5040 ffff81010439a080 00000166b8e1911c
Nov 12 15:32:31 bldr-ccm20 kernel:  00000000000004ef ffff81012dbd5228 ffff810100000002 00000166b8dfdbd6
Nov 12 15:32:31 bldr-ccm20 kernel: Call Trace:
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff80060f29>] thread_return+0x0/0xeb
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff80061be8>] __mutex_lock_slowpath+0x55/0x90
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff80089a00>] __cond_resched+0x1c/0x44
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff80061c32>] .text.lock.mutex+0xf/0x14
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff88484480>] :kvm_intel:vmcs_readl+0x17/0x1c
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff884660c0>] :kvm:kvm_mmu_page_fault+0x1b/0xb4
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff88464673>] :kvm:kvm_arch_vcpu_ioctl_run+0x277/0x393
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8846150e>] :kvm:kvm_vcpu_ioctl+0xc3/0x388
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff80021d2f>] __up_read+0x19/0x7f
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8009dcf1>] futex_wake+0xc6/0xd5
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8003d808>] do_futex+0x241/0xbc7
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff80093c20>] __dequeue_signal+0x18b/0x19a
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff88461432>] :kvm:kvm_vm_ioctl+0x277/0x290
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff80094cd3>] dequeue_signal+0x3c/0xbd
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff80060f29>] thread_return+0x0/0xeb
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8009527c>] sys_rt_sigtimedwait+0xcb/0x2b4
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8003fc22>] do_ioctl+0x21/0x6b
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8002fc67>] vfs_ioctl+0x248/0x261
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8004a242>] sys_ioctl+0x59/0x78
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8005b28d>] tracesys+0xd5/0xe0
Nov 12 15:32:31 bldr-ccm20 kernel: 
Nov 12 15:32:31 bldr-ccm20 kernel: process trace for qemu-system-x86(3856)
Nov 12 15:32:31 bldr-ccm20 kernel:  ffff81010b6edca8 0000000000000086 0000000000000000 0000000000000001
Nov 12 15:32:31 bldr-ccm20 kernel:  0000000000000001 ffff81012db040c0 ffff81012fa0a0c0 00000166b94a772c
Nov 12 15:32:31 bldr-ccm20 kernel:  0000000000008f94 ffff81012db042b0 0000000000000000 ffff81012fa0a0c0
Nov 12 15:32:31 bldr-ccm20 kernel: Call Trace:
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8848475b>] :kvm_intel:vmcs_writel+0x20/0x35
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8846463b>] :kvm:kvm_arch_vcpu_ioctl_run+0x23f/0x393
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8846150e>] :kvm:kvm_vcpu_ioctl+0xc3/0x388
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff80021d2f>] __up_read+0x19/0x7f
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8009dcf1>] futex_wake+0xc6/0xd5
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8003d808>] do_futex+0x241/0xbc7
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff80093c20>] __dequeue_signal+0x18b/0x19a
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff80094cd3>] dequeue_signal+0x3c/0xbd
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8009527c>] sys_rt_sigtimedwait+0xcb/0x2b4
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8003fc22>] do_ioctl+0x21/0x6b
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8002fc67>] vfs_ioctl+0x248/0x261
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8004a242>] sys_ioctl+0x59/0x78
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8005b28d>] tracesys+0xd5/0xe0
Nov 12 15:32:31 bldr-ccm20 kernel: 
Nov 12 15:32:31 bldr-ccm20 kernel: process trace for qemu-system-x86(3857)
Nov 12 15:32:31 bldr-ccm20 kernel:  ffff81010b571c38 0000000000000086 ffff81010b767328 00000000ffffd0b0
Nov 12 15:32:31 bldr-ccm20 kernel:  0000000000000007 ffff81010ad3c820 ffff81010439a080 00000166b9793501
Nov 12 15:32:31 bldr-ccm20 kernel:  0000000000009420 ffff81010ad3ca08 ffffffff00000002 ffff81000100c400
Nov 12 15:32:31 bldr-ccm20 kernel: Call Trace:
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff80061be8>] __mutex_lock_slowpath+0x55/0x90
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff80061c32>] .text.lock.mutex+0xf/0x14
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff88484480>] :kvm_intel:vmcs_readl+0x17/0x1c
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff884660c0>] :kvm:kvm_mmu_page_fault+0x1b/0xb4
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff88464673>] :kvm:kvm_arch_vcpu_ioctl_run+0x277/0x393
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8846150e>] :kvm:kvm_vcpu_ioctl+0xc3/0x388
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff80021d2f>] __up_read+0x19/0x7f
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8009dcf1>] futex_wake+0xc6/0xd5
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8003d808>] do_futex+0x241/0xbc7
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff80093c20>] __dequeue_signal+0x18b/0x19a
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff88461432>] :kvm:kvm_vm_ioctl+0x277/0x290
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff80094cd3>] dequeue_signal+0x3c/0xbd
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff80095415>] sys_rt_sigtimedwait+0x264/0x2b4
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8009527c>] sys_rt_sigtimedwait+0xcb/0x2b4
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8003fc22>] do_ioctl+0x21/0x6b
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8002fc67>] vfs_ioctl+0x248/0x261
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8004a242>] sys_ioctl+0x59/0x78
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8005b28d>] tracesys+0xd5/0xe0
Nov 12 15:32:31 bldr-ccm20 kernel: 
Nov 12 15:32:31 bldr-ccm20 kernel: process trace for qemu-system-x86(3859)
Nov 12 15:32:31 bldr-ccm20 kernel:  ffff81012269fd48 0000000000000086 ffff810103efe6a0 ffffffff8000bf9a
Nov 12 15:32:31 bldr-ccm20 kernel:  000000000000000a ffff81012c2160c0 ffff81012db4e100 00000133b7237c71
Nov 12 15:32:31 bldr-ccm20 kernel:  00000000000020c7 ffff81012c2162a8 0000000000000002 0000000000039b64
Nov 12 15:32:31 bldr-ccm20 kernel: Call Trace:
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8000bf9a>] do_generic_mapping_read+0x3b6/0x3f8
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8003d03b>] lock_timer_base+0x1b/0x3c
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8001c40f>] __mod_timer+0xb0/0xbe
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff80061839>] schedule_timeout+0x8a/0xad
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff80092ada>] process_timeout+0x0/0x5
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8003d7a1>] do_futex+0x1da/0xbc7
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff800884ac>] default_wake_function+0x0/0xe
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff80058a29>] group_send_sig_info+0x62/0x6f
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff800947af>] kill_proc_info+0x48/0x60
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8009e935>] sys_futex+0x101/0x123
Nov 12 15:32:31 bldr-ccm20 kernel:  [<ffffffff8005b28d>] tracesys+0xd5/0xe0
Nov 12 15:32:31 bldr-ccm20 kernel: 
Nov 12 15:33:44 bldr-ccm20 kernel: process trace for qemu-system-x86(3853)
Nov 12 15:33:44 bldr-ccm20 kernel:  0000000000000000 0000000000000086 ffff81011c30b1e8 00000000ffffd0b0
Nov 12 15:33:44 bldr-ccm20 kernel:  0000000000000007 ffff81012db4e100 ffff81012f9f8080 00000177a9ebb592
Nov 12 15:33:44 bldr-ccm20 kernel:  000000000000719d ffff81012db4e2f0 ffff810100000001 ffff81012f9f8080
Nov 12 15:33:44 bldr-ccm20 kernel: Call Trace:
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8848475b>] :kvm_intel:vmcs_writel+0x20/0x35
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff80089a00>] __cond_resched+0x1c/0x44
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8006104f>] cond_resched+0x3b/0x42
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8846463b>] :kvm:kvm_arch_vcpu_ioctl_run+0x23f/0x393
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8846150e>] :kvm:kvm_vcpu_ioctl+0xc3/0x388
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8005fdf0>] copy_user_generic+0x7c/0x126
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff800dd3ac>] core_sys_select+0x234/0x265
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8006b75e>] do_gettimeofday+0x50/0x92
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff80058baf>] getnstimeofday+0x10/0x28
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff88461432>] :kvm:kvm_vm_ioctl+0x277/0x290
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8009d243>] enqueue_hrtimer+0x55/0x70
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8003bb73>] hrtimer_start+0xbc/0xce
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8003fc22>] do_ioctl+0x21/0x6b
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8002fc67>] vfs_ioctl+0x248/0x261
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8004a242>] sys_ioctl+0x59/0x78
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8005b28d>] tracesys+0xd5/0xe0
Nov 12 15:33:44 bldr-ccm20 kernel: 
Nov 12 15:33:44 bldr-ccm20 kernel: process trace for qemu-system-x86(3855)
Nov 12 15:33:44 bldr-ccm20 kernel:  ffff81010b72dc38 0000000000000001 00000000000003d7 00000000ffffd0b0
Nov 12 15:33:44 bldr-ccm20 kernel:  ffffffff88479c60 ffffffff88466d94 0000000100000001 0000000000000000
Nov 12 15:33:44 bldr-ccm20 kernel:  ffff81012dbd5040 ffff81012f81d8c0 ffff81012e9832c0 ffff810001004400
Nov 12 15:33:44 bldr-ccm20 kernel: Call Trace:
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8846dd0a>] :kvm:apic_mmio_write+0x198/0x48b
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff88466d94>] :kvm:paging32_gva_to_gpa+0x21/0x47
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff80060f29>] thread_return+0x0/0xeb
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8848475b>] :kvm_intel:vmcs_writel+0x20/0x35
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8848475b>] :kvm_intel:vmcs_writel+0x20/0x35
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8846463b>] :kvm:kvm_arch_vcpu_ioctl_run+0x23f/0x393
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8846150e>] :kvm:kvm_vcpu_ioctl+0xc3/0x388
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff80021d2f>] __up_read+0x19/0x7f
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8009dcf1>] futex_wake+0xc6/0xd5
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8003d808>] do_futex+0x241/0xbc7
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff80093c20>] __dequeue_signal+0x18b/0x19a
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff88461432>] :kvm:kvm_vm_ioctl+0x277/0x290
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8003fc22>] do_ioctl+0x21/0x6b
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8002fc67>] vfs_ioctl+0x248/0x261
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8004a242>] sys_ioctl+0x59/0x78
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8005b28d>] tracesys+0xd5/0xe0
Nov 12 15:33:44 bldr-ccm20 kernel: 
Nov 12 15:33:44 bldr-ccm20 kernel: process trace for qemu-system-x86(3856)
Nov 12 15:33:44 bldr-ccm20 kernel:  ffff81010b6edca8 0000000000000086 ffff81011bcf7d10 ffff81000101e4e0
Nov 12 15:33:44 bldr-ccm20 kernel:  0000000000000001 ffff81012db040c0 ffff81012c799860 00000177a9e2f4cb
Nov 12 15:33:44 bldr-ccm20 kernel:  0000000000001433 ffff81012db042b0 0000000000000003 ffff81012c799860
Nov 12 15:33:44 bldr-ccm20 kernel: Call Trace:
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff80089a00>] __cond_resched+0x1c/0x44
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8006104f>] cond_resched+0x3b/0x42
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff884646e2>] :kvm:kvm_arch_vcpu_ioctl_run+0x2e6/0x393
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8846150e>] :kvm:kvm_vcpu_ioctl+0xc3/0x388
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff80021d2f>] __up_read+0x19/0x7f
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8009dcf1>] futex_wake+0xc6/0xd5
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8003d808>] do_futex+0x241/0xbc7
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff80093c20>] __dequeue_signal+0x18b/0x19a
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff80094cd3>] dequeue_signal+0x3c/0xbd
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8009527c>] sys_rt_sigtimedwait+0xcb/0x2b4
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8003fc22>] do_ioctl+0x21/0x6b
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8002fc67>] vfs_ioctl+0x248/0x261
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8004a242>] sys_ioctl+0x59/0x78
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8005b28d>] tracesys+0xd5/0xe0
Nov 12 15:33:44 bldr-ccm20 kernel: 
Nov 12 15:33:44 bldr-ccm20 kernel: process trace for qemu-system-x86(3857)
Nov 12 15:33:44 bldr-ccm20 kernel:  ffff810125e8f480 ffffffff8846dd0a ffff81010b767328 00000000ffffd0b0
Nov 12 15:33:44 bldr-ccm20 kernel:  ffffffff88479c60 ffffffff88466d94 0000000100000001 ffff810001004478
Nov 12 15:33:44 bldr-ccm20 kernel:  ffff81010b571bc0 ffff81012dbd5040 ffffffff80087ccd ffff810001004400
Nov 12 15:33:44 bldr-ccm20 kernel: Call Trace:
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff884676a1>] :kvm:paging32_page_fault+0x215/0x25a
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8848475b>] :kvm_intel:vmcs_writel+0x20/0x35
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8848475b>] :kvm_intel:vmcs_writel+0x20/0x35
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8846463b>] :kvm:kvm_arch_vcpu_ioctl_run+0x23f/0x393
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8846150e>] :kvm:kvm_vcpu_ioctl+0xc3/0x388
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff80021d2f>] __up_read+0x19/0x7f
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8009dcf1>] futex_wake+0xc6/0xd5
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8003d808>] do_futex+0x241/0xbc7
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff80093c20>] __dequeue_signal+0x18b/0x19a
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff88461432>] :kvm:kvm_vm_ioctl+0x277/0x290
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff80094cd3>] dequeue_signal+0x3c/0xbd
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff80095415>] sys_rt_sigtimedwait+0x264/0x2b4
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8009527c>] sys_rt_sigtimedwait+0xcb/0x2b4
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8003fc22>] do_ioctl+0x21/0x6b
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8002fc67>] vfs_ioctl+0x248/0x261
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8004a242>] sys_ioctl+0x59/0x78
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8005b28d>] tracesys+0xd5/0xe0
Nov 12 15:33:44 bldr-ccm20 kernel: 
Nov 12 15:33:44 bldr-ccm20 kernel: process trace for qemu-system-x86(3859)
Nov 12 15:33:44 bldr-ccm20 kernel:  ffff81012269fd48 0000000000000086 ffff810103efe6a0 ffffffff8000bf9a
Nov 12 15:33:44 bldr-ccm20 kernel:  000000000000000a ffff81012c2160c0 ffff81012db4e100 00000133b7237c71
Nov 12 15:33:44 bldr-ccm20 kernel:  00000000000020c7 ffff81012c2162a8 0000000000000002 0000000000039b64
Nov 12 15:33:44 bldr-ccm20 kernel: Call Trace:
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8000bf9a>] do_generic_mapping_read+0x3b6/0x3f8
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8003d03b>] lock_timer_base+0x1b/0x3c
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8001c40f>] __mod_timer+0xb0/0xbe
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff80061839>] schedule_timeout+0x8a/0xad
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff80092ada>] process_timeout+0x0/0x5
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8003d7a1>] do_futex+0x1da/0xbc7
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff800884ac>] default_wake_function+0x0/0xe
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff80058a29>] group_send_sig_info+0x62/0x6f
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff800947af>] kill_proc_info+0x48/0x60
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8009e935>] sys_futex+0x101/0x123
Nov 12 15:33:44 bldr-ccm20 kernel:  [<ffffffff8005b28d>] tracesys+0xd5/0xe0
Nov 12 15:33:44 bldr-ccm20 kernel: 

[-- Attachment #3: Type: text/plain, Size: 314 bytes --]

-------------------------------------------------------------------------
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now >> http://get.splunk.com/

[-- Attachment #4: Type: text/plain, Size: 186 bytes --]

_______________________________________________
kvm-devel mailing list
kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org
https://lists.sourceforge.net/lists/listinfo/kvm-devel

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [ANNOUNCE] kvm-51 release
       [not found]                                         ` <4738C9B5.6060609-FYB4Gu1CFyUAvxtiuMwx3w@public.gmane.org>
  2007-11-12 22:37                                           ` RHEL5 smp guests on RHE5.1 hosts hang with kvm-52 david ahern
@ 2007-11-13  8:29                                           ` Avi Kivity
       [not found]                                             ` <4739605A.4010309-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
  1 sibling, 1 reply; 32+ messages in thread
From: Avi Kivity @ 2007-11-13  8:29 UTC (permalink / raw)
  To: david ahern; +Cc: kvm-devel

david ahern wrote:
> With kvm-52 my 32-bit host running RHEL5.1 can start an RHEL 5 SMP guest only once. Second and subsequent attempts hang. Removing the kvm and kvm_intel modules has no effect; I need to reboot the host to get an SMP guest to start. My similarly configured 64-bit host does not seem to have this problem.
>
>
>   

What happens if you boot a host, wait a while (say a couple of hours),
and then start a guest?

(I'm suspecting unsynchronized tscs)
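
If unsynchronized TSCs are the culprit, the host kernel usually says so at boot. A quick way to check on the affected host (these commands are illustrative; the exact messages and sysfs paths vary by kernel version):

```
# Look for TSC-related warnings in the host kernel's boot log
dmesg | grep -i tsc

# On kernels that expose it, show which clocksource is currently in use
cat /sys/devices/system/clocksource/clocksource0/current_clocksource
```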

-- 
Do not meddle in the internals of kernels, for they are subtle and quick to panic.


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: RHEL5 smp guests on RHE5.1 hosts hang with kvm-52
       [not found]                                             ` <4738D58C.70304-FYB4Gu1CFyUAvxtiuMwx3w@public.gmane.org>
@ 2007-11-13 10:07                                               ` Farkas Levente
  0 siblings, 0 replies; 32+ messages in thread
From: Farkas Levente @ 2007-11-13 10:07 UTC (permalink / raw)
  To: david ahern; +Cc: kvm-devel, Avi Kivity

david ahern wrote:
> (Changed the subject to correspond with email.)
> 
> I am having the same problem on the 64-bit host running RHEL5.1 as well, it just takes more reboots. Same symptoms as I mentioned for the 32-bit host. kernel side stack traces for each qemu thread for one of the lockups is attached; the file contains traces for each thread at 3 sample times in case it helps get some insight.

It seems I'm not the only one :-)

-- 
  Levente                               "Si vis pacem para bellum!"

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [ANNOUNCE] kvm-51 release
       [not found]                                             ` <4739605A.4010309-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2007-11-13 16:12                                               ` david ahern
       [not found]                                                 ` <4739CCED.2060105-FYB4Gu1CFyUAvxtiuMwx3w@public.gmane.org>
  0 siblings, 1 reply; 32+ messages in thread
From: david ahern @ 2007-11-13 16:12 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-devel

I let the host stay up for 90 minutes before loading kvm and starting a VM. On the first reboot it hangs at 'Starting udev'.

I added 'noapic' to the kernel boot options, and it boots fine. (Turns out I only added that to grub.conf in images that run a particular app for which I am running performance tests.)
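
For reference, the workaround amounts to appending `noapic` to the guest kernel's boot line in grub.conf; a hypothetical entry (kernel version, device paths, and other options are placeholders) would look like:

```
title RHEL 5 (2.6.18-8.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/VolGroup00/LogVol00 noapic
        initrd /initrd-2.6.18-8.el5.img
```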

I would like to know why I need the noapic option to get around this and the networking problem. Are there performance hits as a side effect?

david

Avi Kivity wrote:
> david ahern wrote:
>> With kvm-52 my 32-bit host running RHEL5.1 can start an RHEL 5 SMP guest only once. Second and subsequent attempts hang. Removing the kvm and kvm_intel modules has no effect; I need to reboot the host to get an SMP guest to start. My similarly configured 64-bit host does not seem to have this problem.
>>
>>
>>   
> 
> What happens if you boot a host, wait a while (say a couple of hours),
> and then start a guest?
> 
> (I'm suspecting unsynchronized tscs)
> 

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [ANNOUNCE] kvm-51 release
       [not found]                                                 ` <4739CCED.2060105-FYB4Gu1CFyUAvxtiuMwx3w@public.gmane.org>
@ 2007-11-13 16:15                                                   ` Avi Kivity
       [not found]                                                     ` <4739CDAD.1030506-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
  0 siblings, 1 reply; 32+ messages in thread
From: Avi Kivity @ 2007-11-13 16:15 UTC (permalink / raw)
  To: david ahern; +Cc: kvm-devel

david ahern wrote:
> I let the host stay up for 90 minutes before loading kvm and starting a VM. On the first reboot it hangs at 'Starting udev'.
>
>   

First reboot or first boot?

I thought the problem was cold starting a VM.

> I added 'noapic' to the kernel boot options, and it boots fine. (Turns out I only added that to grub.conf in images that run a particular ap for which I am running performance tests.)
>
> I would like to know why I need the noapic option to get around this and the networking problem. Are there performance hits as a side effect?
>
>   

Looks like there's a bug in the apic emulation.  There probably are 
performance implications.  Does -no-kvm-irqchip help?
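
For anyone trying this suggestion: `-no-kvm-irqchip` is passed on the qemu command line and makes qemu fall back to its userspace PIC/APIC models instead of the in-kernel irqchip, which helps isolate whether the in-kernel apic emulation is at fault. An illustrative invocation (image name, memory size, and vcpu count are placeholders):

```
qemu-system-x86_64 -no-kvm-irqchip -m 512 -smp 2 -hda guest.img
```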


-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [ANNOUNCE] kvm-51 release
       [not found]                                                     ` <4739CDAD.1030506-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2007-11-13 16:31                                                       ` david ahern
       [not found]                                                         ` <4739D167.6020508-FYB4Gu1CFyUAvxtiuMwx3w@public.gmane.org>
  0 siblings, 1 reply; 32+ messages in thread
From: david ahern @ 2007-11-13 16:31 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-devel

First boot has been working fine since your patch this past weekend. It's been subsequent boots that hang.

I added -no-kvm-irqchip to the qemu command line and did not add the noapic boot option: it's hung at 'Starting udev' again, but this time it's not consuming CPU. Kernel stack traces for the qemu threads:

Nov 13 09:27:51 bldr-ccm89 kernel: process trace for qemu-system-x86(3907)
Nov 13 09:27:51 bldr-ccm89 kernel:        00000001 00000282 c0438eb7 00000000 c07972d4 c0439187 00000001 0817a000
Nov 13 09:27:51 bldr-ccm89 kernel:        f7aed200 000007c4 0817a7c4 0a9a7fd0 0817a7c4 0817a7c4 c0439d66 fff6c373
Nov 13 09:27:51 bldr-ccm89 kernel:        ffffffff 0290e500 f7ae4058 00000001 f5274f18 c042e759 00000000 c30126e0
Nov 13 09:27:51 bldr-ccm89 kernel: Call Trace:
Nov 13 09:27:51 bldr-ccm89 kernel:  [<c0438eb7>] wake_futex+0x3a/0x44
Nov 13 09:27:51 bldr-ccm89 kernel:  [<c0439187>] futex_wake+0xa9/0xb3
Nov 13 09:27:51 bldr-ccm89 kernel:  [<c0439d66>] do_futex+0x20d/0xb15
Nov 13 09:27:51 bldr-ccm89 kernel:  [<c042e759>] __dequeue_signal+0x151/0x15c
Nov 13 09:27:51 bldr-ccm89 kernel:  [<c0604884>] schedule_timeout+0x71/0x8c
Nov 13 09:27:51 bldr-ccm89 kernel:  [<c042d1ab>] process_timeout+0x0/0x5
Nov 13 09:27:51 bldr-ccm89 kernel:  [<c0430747>] sys_rt_sigtimedwait+0x1e0/0x2c2
Nov 13 09:27:51 bldr-ccm89 kernel:  [<c042cc0e>] getnstimeofday+0x30/0xb6
Nov 13 09:27:51 bldr-ccm89 kernel:  [<c04386d6>] ktime_get_ts+0x16/0x44
Nov 13 09:27:51 bldr-ccm89 kernel:  [<c04388b6>] ktime_get+0x12/0x34
Nov 13 09:27:51 bldr-ccm89 kernel:  [<c04352a6>] common_timer_get+0xee/0x129
Nov 13 09:27:51 bldr-ccm89 kernel:  [<c044abd9>] audit_syscall_entry+0x11c/0x14e
Nov 13 09:27:51 bldr-ccm89 kernel:  [<c0404eff>] syscall_call+0x7/0xb
Nov 13 09:27:51 bldr-ccm89 kernel:  =======================
Nov 13 09:27:55 bldr-ccm89 kernel: process trace for qemu-system-x86(3909)
Nov 13 09:27:55 bldr-ccm89 kernel:        f47a6ee4 00000086 c0438eb7 ec1af7ea 00000734 c0439187 00000009 f7c13000
Nov 13 09:27:55 bldr-ccm89 kernel:        c066d3c0 ec1b88ca 00000734 000090e0 00000000 f7c1310c c30126e0 c0673b80
Nov 13 09:27:55 bldr-ccm89 kernel:        00000082 00000046 f7ae4058 ffffffff 00000000 00000000 7fffffff 7fffffff
Nov 13 09:27:55 bldr-ccm89 kernel: Call Trace:
Nov 13 09:27:55 bldr-ccm89 kernel:  [<c0438eb7>] wake_futex+0x3a/0x44
Nov 13 09:27:55 bldr-ccm89 kernel:  [<c0439187>] futex_wake+0xa9/0xb3
Nov 13 09:27:55 bldr-ccm89 kernel:  [<c0604826>] schedule_timeout+0x13/0x8c
Nov 13 09:27:55 bldr-ccm89 kernel:  [<c042fa99>] dequeue_signal+0x2d/0x9c
Nov 13 09:27:55 bldr-ccm89 kernel:  [<c0430747>] sys_rt_sigtimedwait+0x1e0/0x2c2
Nov 13 09:27:55 bldr-ccm89 kernel:  [<c04202b1>] default_wake_function+0x0/0xc
Nov 13 09:27:55 bldr-ccm89 kernel:  [<f8c15319>] kvm_vcpu_ioctl+0x0/0x366 [kvm]
Nov 13 09:27:55 bldr-ccm89 kernel:  [<c044abd9>] audit_syscall_entry+0x11c/0x14e
Nov 13 09:27:55 bldr-ccm89 kernel:  [<c0404eff>] syscall_call+0x7/0xb
Nov 13 09:27:55 bldr-ccm89 kernel:  =======================
Nov 13 09:27:59 bldr-ccm89 kernel: process trace for qemu-system-x86(3910)
Nov 13 09:27:59 bldr-ccm89 kernel:        f4d19ee4 00000086 c0438eb7 04c3e7a7 00000736 c0439187 0000000a f7c0a000
Nov 13 09:27:59 bldr-ccm89 kernel:        f7450550 04c48a79 00000736 0000a2d2 00000002 f7c0a10c c30226e0 c0673b80
Nov 13 09:27:59 bldr-ccm89 kernel:        00000082 00000046 f7ae4058 f4d19f18 f4d19f18 c042e759 7fffffff 7fffffff
Nov 13 09:27:59 bldr-ccm89 kernel: Call Trace:
Nov 13 09:27:59 bldr-ccm89 kernel:  [<c0438eb7>] wake_futex+0x3a/0x44
Nov 13 09:27:59 bldr-ccm89 kernel:  [<c0439187>] futex_wake+0xa9/0xb3
Nov 13 09:27:59 bldr-ccm89 kernel:  [<c042e759>] __dequeue_signal+0x151/0x15c
Nov 13 09:27:59 bldr-ccm89 kernel:  [<c0604826>] schedule_timeout+0x13/0x8c
Nov 13 09:27:59 bldr-ccm89 kernel:  [<c042fa99>] dequeue_signal+0x2d/0x9c
Nov 13 09:27:59 bldr-ccm89 kernel:  [<c0430747>] sys_rt_sigtimedwait+0x1e0/0x2c2
Nov 13 09:27:59 bldr-ccm89 kernel:  [<c04202b1>] default_wake_function+0x0/0xc
Nov 13 09:27:59 bldr-ccm89 kernel:  [<f8c15319>] kvm_vcpu_ioctl+0x0/0x366 [kvm]
Nov 13 09:27:59 bldr-ccm89 kernel:  [<c044abd9>] audit_syscall_entry+0x11c/0x14e
Nov 13 09:27:59 bldr-ccm89 kernel:  [<c0404eff>] syscall_call+0x7/0xb
Nov 13 09:27:59 bldr-ccm89 kernel:  =======================
Nov 13 09:28:02 bldr-ccm89 kernel: process trace for qemu-system-x86(3911)
Nov 13 09:28:02 bldr-ccm89 kernel:        f4370ee4 00000086 c0438eb7 9b91a394 00000736 c0439187 00000009 f7450550
Nov 13 09:28:02 bldr-ccm89 kernel:        c3107000 9b922a3f 00000736 000086ab 00000002 f745065c c30226e0 c0673b80
Nov 13 09:28:02 bldr-ccm89 kernel:        00000082 00000046 f7ae4058 ffffffff 00000000 00000000 7fffffff 7fffffff
Nov 13 09:28:02 bldr-ccm89 kernel: Call Trace:
Nov 13 09:28:02 bldr-ccm89 kernel:  [<c0438eb7>] wake_futex+0x3a/0x44
Nov 13 09:28:02 bldr-ccm89 kernel:  [<c0439187>] futex_wake+0xa9/0xb3
Nov 13 09:28:02 bldr-ccm89 kernel:  [<c0604826>] schedule_timeout+0x13/0x8c
Nov 13 09:28:02 bldr-ccm89 kernel:  [<c042fa99>] dequeue_signal+0x2d/0x9c
Nov 13 09:28:02 bldr-ccm89 kernel:  [<c0430747>] sys_rt_sigtimedwait+0x1e0/0x2c2
Nov 13 09:28:02 bldr-ccm89 kernel:  [<c04202b1>] default_wake_function+0x0/0xc
Nov 13 09:28:02 bldr-ccm89 kernel:  [<c0435bed>] sys_timer_settime+0x243/0x24f
Nov 13 09:28:02 bldr-ccm89 kernel:  [<f8c15319>] kvm_vcpu_ioctl+0x0/0x366 [kvm]
Nov 13 09:28:02 bldr-ccm89 kernel:  [<c044abd9>] audit_syscall_entry+0x11c/0x14e
Nov 13 09:28:02 bldr-ccm89 kernel:  [<c047f473>] vfs_ioctl+0x24a/0x25c
Nov 13 09:28:02 bldr-ccm89 kernel:  [<c047f4cd>] sys_ioctl+0x48/0x5f
Nov 13 09:28:02 bldr-ccm89 kernel:  [<c0404eff>] syscall_call+0x7/0xb
Nov 13 09:28:02 bldr-ccm89 kernel:  =======================
Nov 13 09:28:05 bldr-ccm89 kernel: process trace for qemu-system-x86(3913)
Nov 13 09:28:05 bldr-ccm89 kernel:        f5442e90 00000086 f5aaa4ac f0aa7e8f 00000715 00000019 0000000a f7c1daa0
Nov 13 09:28:05 bldr-ccm89 kernel:        f7c13aa0 f0aaa96f 00000715 00002ae0 00000003 f7c1dbac c302a6e0 c042da86
Nov 13 09:28:05 bldr-ccm89 kernel:        f7d20000 f5442e98 00000286 c042db97 00000000 00000286 80728887 80728887
Nov 13 09:28:05 bldr-ccm89 kernel: Call Trace:
Nov 13 09:28:05 bldr-ccm89 kernel:  [<c042da86>] lock_timer_base+0x15/0x2f
Nov 13 09:28:05 bldr-ccm89 kernel:  [<c042db97>] __mod_timer+0x99/0xa3
Nov 13 09:28:05 bldr-ccm89 kernel:  [<c0604884>] schedule_timeout+0x71/0x8c
Nov 13 09:28:05 bldr-ccm89 kernel:  [<c042d1ab>] process_timeout+0x0/0x5
Nov 13 09:28:05 bldr-ccm89 kernel:  [<c0439cf5>] do_futex+0x19c/0xb15
Nov 13 09:28:05 bldr-ccm89 kernel:  [<c042e937>] send_signal+0x47/0xde
Nov 13 09:28:05 bldr-ccm89 kernel:  [<c042eea4>] __group_send_sig_info+0x74/0x7e
Nov 13 09:28:05 bldr-ccm89 kernel:  [<c04202b1>] default_wake_function+0x0/0xc
Nov 13 09:28:05 bldr-ccm89 kernel:  [<c043a777>] sys_futex+0x109/0x11f
Nov 13 09:28:05 bldr-ccm89 kernel:  [<c0404eff>] syscall_call+0x7/0xb
Nov 13 09:28:05 bldr-ccm89 kernel:  =======================
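[Editor's note: per-task kernel stack dumps like the ones above are commonly obtained with the magic SysRq facility (`echo t > /proc/sysrq-trigger` as root, output lands in the kernel log) and then filtered out of the syslog; the thread does not say exactly how david captured these, so treat the capture step as an assumption. A minimal sketch of the filtering step, using a trimmed excerpt of the traces above as sample input:]

```shell
# Hedged sketch: extract the qemu thread PIDs from a captured syslog.
# On a live host the dump itself would come from (root only, assumed method):
#   echo t > /proc/sysrq-trigger
log='Nov 13 09:27:59 bldr-ccm89 kernel: process trace for qemu-system-x86(3910)
Nov 13 09:27:59 bldr-ccm89 kernel: Call Trace:
Nov 13 09:27:59 bldr-ccm89 kernel:  [<c0438eb7>] wake_futex+0x3a/0x44
Nov 13 09:28:02 bldr-ccm89 kernel: process trace for qemu-system-x86(3911)
Nov 13 09:28:02 bldr-ccm89 kernel: Call Trace:'

# Keep only the trace-header lines and print one PID per line:
pids=$(printf '%s\n' "$log" | sed -n 's/.*qemu-system-x86(\([0-9]*\)).*/\1/p')
printf '%s\n' "$pids"
```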

david

Avi Kivity wrote:
> david ahern wrote:
>> I let the host stay up for 90 minutes before loading kvm and starting a VM. On the first reboot it hangs at 'Starting udev'.
>>
>>   
> 
> First reboot or first boot?
> 
> I thought the problem was cold starting a VM.
> 
>> I added 'noapic' to the kernel boot options, and it boots fine. (It turns out I only added that to grub.conf in images that run a particular app for which I am running performance tests.)
>>
>> I would like to know why I need the noapic option to get around this and the networking problem. Are there performance hits as a side effect?
>>
>>   
> 
> Looks like there's a bug in the apic emulation.  There probably are 
> performance implications.  Does -no-kvm-irqchip help?
> 
> 

-------------------------------------------------------------------------
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now >> http://get.splunk.com/

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [ANNOUNCE] kvm-51 release
       [not found]                                                         ` <4739D167.6020508-FYB4Gu1CFyUAvxtiuMwx3w@public.gmane.org>
@ 2007-11-13 16:32                                                           ` Avi Kivity
       [not found]                                                             ` <4739D188.5020606-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
  0 siblings, 1 reply; 32+ messages in thread
From: Avi Kivity @ 2007-11-13 16:32 UTC (permalink / raw)
  To: david ahern; +Cc: kvm-devel

david ahern wrote:
> First boot has been working fine since your patch this past weekend. It's been subsequent boots that hang.
>
> I added -no-kvm-irqchip to the qemu command line and did not add the noapic boot option: it's hung at 'Starting udev' again, but this time it's not consuming CPU. Kernel stack traces for the qemu threads:
>
>   

Ah okay.  I misunderstood.

How about -no-kvm?  Maybe it's a qemu problem.

-- 
error compiling committee.c: too many arguments to function



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [ANNOUNCE] kvm-51 release
       [not found]                                                             ` <4739D188.5020606-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2007-11-13 16:54                                                               ` david ahern
       [not found]                                                                 ` <4739D6B5.2040802-FYB4Gu1CFyUAvxtiuMwx3w@public.gmane.org>
  0 siblings, 1 reply; 32+ messages in thread
From: david ahern @ 2007-11-13 16:54 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm-devel

I removed the kvm/kvm-intel modules. The qemu command line was:

/usr/local/bin/qemu-system-x86_64 -boot c -localtime -hda /opt/kvm/images/rhel5.img -m 1536 -smp 4 -net nic,macaddr=00:0c:29:10:10:e8,model=rtl8139 -net tap,ifname=tap0,script=/bin/true -monitor stdio -no-kvm -name bldr-ccm89.cisco.com -vnc :2

I did *not* add 'noapic' to guest kernel boot. 

The VM boot went fine; the reboot did not. The qemu process was showing 100% CPU. After a few minutes I hit ctrl-c to terminate qemu and then restarted exactly the same command. Same result: the boot went fine; the shutdown did not, though it hung at a different spot.

If it matters, host for this test is an HP DL380 G5.

david


Avi Kivity wrote:
> david ahern wrote:
>> First boot has been working fine since your patch this past weekend.
>> It's been subsequent boots that hang.
>>
>> I added -no-kvm-irqchip to the qemu command line and did not add the
>> noapic boot option: it's hung at 'Starting udev' again, but this time
>> it's not consuming CPU. Kernel stack traces for the qemu threads:
>>
>>   
> 
> Ah okay.  I misunderstood.
> 
> How about -no-kvm?  Maybe it's a qemu problem.
> 
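[Editor's note: the invocation reported above can be reconstructed as a small wrapper for later re-testing. The sketch below only assembles and echoes the command string so the flags can be inspected (actually launching it requires qemu and the disk image); the binary path, image path, and MAC address are taken verbatim from the report, not verified here.]

```shell
# Sketch only: rebuild the reported -no-kvm command line as a string.
# This does not launch the guest; it just makes the flags inspectable.
img=/opt/kvm/images/rhel5.img          # image path from the report
cmd="/usr/local/bin/qemu-system-x86_64 -boot c -localtime -hda $img \
-m 1536 -smp 4 -net nic,macaddr=00:0c:29:10:10:e8,model=rtl8139 \
-net tap,ifname=tap0,script=/bin/true -monitor stdio -no-kvm \
-name bldr-ccm89.cisco.com -vnc :2"
echo "$cmd"
```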


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [ANNOUNCE] kvm-51 release
       [not found]                                                                 ` <4739D6B5.2040802-FYB4Gu1CFyUAvxtiuMwx3w@public.gmane.org>
@ 2007-11-13 16:58                                                                   ` Avi Kivity
  0 siblings, 0 replies; 32+ messages in thread
From: Avi Kivity @ 2007-11-13 16:58 UTC (permalink / raw)
  To: david ahern; +Cc: kvm-devel

david ahern wrote:
> I removed the kvm/kvm-intel modules. The qemu command line was:
>
> /usr/local/bin/qemu-system-x86_64 -boot c -localtime -hda /opt/kvm/images/rhel5.img -m 1536 -smp 4 -net nic,macaddr=00:0c:29:10:10:e8,model=rtl8139 -net tap,ifname=tap0,script=/bin/true -monitor stdio -no-kvm -name bldr-ccm89.cisco.com -vnc :2
>
> I did *not* add 'noapic' to guest kernel boot. 
>
> The VM boot went fine; the reboot did not. The qemu process was showing 100% CPU. After a few minutes I hit ctrl-c to terminate qemu and then restarted exactly the same command. Same result: the boot went fine; the shutdown did not, though it hung at a different spot.
>   

Thanks; that helps isolate the problem.  I'll probably be able to 
reproduce it since it's likely not a timing issue.

-- 
error compiling committee.c: too many arguments to function



^ permalink raw reply	[flat|nested] 32+ messages in thread

end of thread, other threads:[~2007-11-13 16:58 UTC | newest]

Thread overview: 32+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2007-11-07 17:28 [ANNOUNCE] kvm-51 release Avi Kivity
     [not found] ` <4731F5B5.1000108-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-11-07 19:35   ` Haydn Solomon
     [not found]     ` <47321384.8060405-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
2007-11-07 19:48       ` Amit Shah
     [not found]         ` <200711080118.46304.amit.shah-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-11-07 19:55           ` Haydn Solomon
2007-11-08  5:51       ` Avi Kivity
     [not found]         ` <4732A3F6.8070903-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-11-08 13:05           ` Haydn Solomon
2007-11-09 10:25   ` Farkas Levente
     [not found]     ` <473435B6.1000503-lWVWdrzSO4GHXe+LvDLADg@public.gmane.org>
2007-11-09 14:59       ` david ahern
     [not found]         ` <473475C2.1070908-FYB4Gu1CFyUAvxtiuMwx3w@public.gmane.org>
2007-11-10  0:22           ` Farkas Levente
2007-11-11  9:08           ` Avi Kivity
2007-11-11  9:11       ` Avi Kivity
     [not found]         ` <4736C752.7060703-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-11-11 12:58           ` Farkas Levente
     [not found]             ` <4736FC77.2080804-lWVWdrzSO4GHXe+LvDLADg@public.gmane.org>
2007-11-11 14:43               ` Avi Kivity
     [not found]                 ` <47371510.3020804-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-11-11 15:32                   ` david ahern
     [not found]                     ` <47372070.30604-FYB4Gu1CFyUAvxtiuMwx3w@public.gmane.org>
2007-11-11 15:55                       ` david ahern
     [not found]                         ` <47372600.9080009-FYB4Gu1CFyUAvxtiuMwx3w@public.gmane.org>
2007-11-11 16:53                           ` Avi Kivity
     [not found]                             ` <47373380.8040809-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-11-11 17:09                               ` Farkas Levente
     [not found]                                 ` <4737373C.3080009-lWVWdrzSO4GHXe+LvDLADg@public.gmane.org>
2007-11-11 17:11                                   ` Avi Kivity
     [not found]                                     ` <473737D9.4020708-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-11-11 17:20                                       ` Farkas Levente
     [not found]                                         ` <473739EA.9070804-lWVWdrzSO4GHXe+LvDLADg@public.gmane.org>
2007-11-12  8:22                                           ` Avi Kivity
2007-11-11 21:10                               ` david ahern
     [not found]                                 ` <47376FB3.30303-FYB4Gu1CFyUAvxtiuMwx3w@public.gmane.org>
2007-11-12  8:19                                   ` Avi Kivity
     [not found]                                     ` <47380C95.1030502-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-11-12 21:46                                       ` david ahern
     [not found]                                         ` <4738C9B5.6060609-FYB4Gu1CFyUAvxtiuMwx3w@public.gmane.org>
2007-11-12 22:37                                           ` RHEL5 smp guests on RHE5.1 hosts hang with kvm-52 david ahern
     [not found]                                             ` <4738D58C.70304-FYB4Gu1CFyUAvxtiuMwx3w@public.gmane.org>
2007-11-13 10:07                                               ` Farkas Levente
2007-11-13  8:29                                           ` [ANNOUNCE] kvm-51 release Avi Kivity
     [not found]                                             ` <4739605A.4010309-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-11-13 16:12                                               ` david ahern
     [not found]                                                 ` <4739CCED.2060105-FYB4Gu1CFyUAvxtiuMwx3w@public.gmane.org>
2007-11-13 16:15                                                   ` Avi Kivity
     [not found]                                                     ` <4739CDAD.1030506-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-11-13 16:31                                                       ` david ahern
     [not found]                                                         ` <4739D167.6020508-FYB4Gu1CFyUAvxtiuMwx3w@public.gmane.org>
2007-11-13 16:32                                                           ` Avi Kivity
     [not found]                                                             ` <4739D188.5020606-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-11-13 16:54                                                               ` david ahern
     [not found]                                                                 ` <4739D6B5.2040802-FYB4Gu1CFyUAvxtiuMwx3w@public.gmane.org>
2007-11-13 16:58                                                                   ` Avi Kivity

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox