* Biweekly KVM Test report, kernel 7597f... qemu 1c45e...
@ 2009-08-17 1:32 Xu, Jiajun
2009-08-17 12:17 ` Avi Kivity
0 siblings, 1 reply; 9+ messages in thread
From: Xu, Jiajun @ 2009-08-17 1:32 UTC (permalink / raw)
To: 'kvm-devel'
Hi All,
This is the biweekly KVM testing report against the latest kvm.git
7597fa7922136a354824f3360831f13bc98dea4e and qemu-kvm.git
1c45eec341763ed38270a3d1f230044fdfeefb16. No new bug was found this week. The migration bug is fixed.
One Fixed issue:
================================================
1. Guest becomes unresponsive after migration
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2832401&group_id=180599
Eight Old Issues:
================================================
1. Hot-added device is not visible in guest after migration
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2832416&group_id=180599
2. ltp diotest running time is 2.54 times longer than before
https://sourceforge.net/tracker/?func=detail&aid=2723366&group_id=180599&atid=893831
3. 64-bit SMP RHEL5.3 call trace after live migration
https://sourceforge.net/tracker/?func=detail&aid=2761920&group_id=180599&atid=893831
4. 32-bit RHEL5/FC6 guest may fail to reboot after installation
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=1991647&group_id=180599
5. failure to migrate guests with more than 4GB of RAM
https://sourceforge.net/tracker/index.php?func=detail&aid=1971512&group_id=180599&atid=893831
6. OpenSuse10.2 cannot be installed
http://sourceforge.net/tracker/index.php?func=detail&aid=2088475&group_id=180599&atid=893831
7. Failure to save/restore guest
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2175042&group_id=180599
8. perfctr wrmsr warning when booting 64-bit RHEL5.3
https://sourceforge.net/tracker/?func=detail&aid=2721640&group_id=180599&atid=893831
Test environment
================================================
Platform A
Stoakley/Clovertown
CPU 4
Memory size 8G
Report Summary on IA32-pae
Summary Test Report of Last Session
=====================================================================
Total Pass Fail NoResult Crash
=====================================================================
control_panel 8 5 3 0 0
gtest 16 16 0 0 0
=====================================================================
control_panel 8 5 3 0 0
:KVM_256M_guest_PAE_gPAE 1 1 0 0 0
:KVM_linux_win_PAE_gPAE 1 1 0 0 0
:KVM_two_winxp_PAE_gPAE 1 1 0 0 0
:KVM_four_sguest_PAE_gPA 1 1 0 0 0
:KVM_1500M_guest_PAE_gPA 1 1 0 0 0
:KVM_LM_Continuity_PAE_g 1 0 1 0 0
:KVM_LM_SMP_PAE_gPAE 1 0 1 0 0
:KVM_SR_Continuity_PAE_g 1 0 1 0 0
gtest 16 16 0 0 0
:ltp_nightly_PAE_gPAE 1 1 0 0 0
:boot_up_acpi_PAE_gPAE 1 1 0 0 0
:boot_up_acpi_xp_PAE_gPA 1 1 0 0 0
:boot_up_vista_PAE_gPAE 1 1 0 0 0
:reboot_xp_PAE_gPAE 1 1 0 0 0
:boot_base_kernel_PAE_gP 1 1 0 0 0
:boot_up_acpi_win2k3_PAE 1 1 0 0 0
:boot_smp_acpi_win2k3_PA 1 1 0 0 0
:boot_smp_acpi_win2k_PAE 1 1 0 0 0
:boot_up_win2008_PAE_gPA 1 1 0 0 0
:boot_up_acpi_win2k_PAE_ 1 1 0 0 0
:boot_smp_acpi_xp_PAE_gP 1 1 0 0 0
:boot_up_noacpi_win2k_PA 1 1 0 0 0
:boot_smp_vista_PAE_gPAE 1 1 0 0 0
:boot_smp_win2008_PAE_gP 1 1 0 0 0
:kb_nightly_PAE_gPAE 1 1 0 0 0
=====================================================================
Total 24 21 3 0 0
Report Summary on IA32e
Summary Test Report of Last Session
=====================================================================
Total Pass Fail NoResult Crash
=====================================================================
control_panel 17 14 3 0 0
gtest 23 23 0 0 0
=====================================================================
control_panel 17 14 3 0 0
:KVM_4G_guest_64_g32e 1 1 0 0 0
:KVM_four_sguest_64_gPAE 1 1 0 0 0
:KVM_LM_SMP_64_g32e 1 1 0 0 0
:KVM_linux_win_64_gPAE 1 1 0 0 0
:KVM_LM_SMP_64_gPAE 1 1 0 0 0
:KVM_SR_Continuity_64_gP 1 0 1 0 0
:KVM_four_sguest_64_g32e 1 1 0 0 0
:KVM_four_dguest_64_gPAE 1 1 0 0 0
:KVM_SR_SMP_64_gPAE 1 0 1 0 0
:KVM_LM_Continuity_64_g3 1 1 0 0 0
:KVM_1500M_guest_64_gPAE 1 1 0 0 0
:KVM_LM_Continuity_64_gP 1 1 0 0 0
:KVM_1500M_guest_64_g32e 1 1 0 0 0
:KVM_SR_Continuity_64_g3 1 0 1 0 0
:KVM_two_winxp_64_gPAE 1 1 0 0 0
:KVM_256M_guest_64_gPAE 1 1 0 0 0
:KVM_256M_guest_64_g32e 1 1 0 0 0
gtest 23 23 0 0 0
:boot_up_acpi_64_gPAE 1 1 0 0 0
:boot_up_noacpi_xp_64_gP 1 1 0 0 0
:boot_base_kernel_64_gPA 1 1 0 0 0
:boot_up_vista_64_g32e 1 1 0 0 0
:boot_smp_acpi_win2k3_64 1 1 0 0 0
:boot_smp_acpi_win2k_64_ 1 1 0 0 0
:kb_nightly_64_gPAE 1 1 0 0 0
:boot_up_acpi_xp_64_g32e 1 1 0 0 0
:boot_up_noacpi_win2k_64 1 1 0 0 0
:boot_smp_acpi_xp_64_gPA 1 1 0 0 0
:boot_smp_acpi_xp_64_g32 1 1 0 0 0
:boot_smp_vista_64_gPAE 1 1 0 0 0
:boot_up_acpi_64_g32e 1 1 0 0 0
:boot_base_kernel_64_g32 1 1 0 0 0
:kb_nightly_64_g32e 1 1 0 0 0
:boot_up_acpi_win2k3_64_ 1 1 0 0 0
:boot_up_win2008_64_gPAE 1 1 0 0 0
:ltp_nightly_64_g32e 1 1 0 0 0
:boot_smp_win2008_64_g32 1 1 0 0 0
:boot_up_vista_64_gPAE 1 1 0 0 0
:ltp_nightly_64_gPAE 1 1 0 0 0
:boot_smp_acpi_win2k3_64 1 1 0 0 0
:boot_up_noacpi_win2k3_6 1 1 0 0 0
=====================================================================
Total 40 37 3 0 0
Test environment
================================================
Platform B
Nehalem
CPU 8
Memory size 4G
Summary Test Report of Last Session
=====================================================================
Total Pass Fail NoResult Crash
=====================================================================
control_panel_ept_vpid 5 5 0 0 0
control_panel_ept 3 3 0 0 0
control_panel 3 3 0 0 0
control_panel_vpid 3 3 0 0 0
gtest_vpid 4 4 0 0 0
gtest_ept 2 2 0 0 0
gtest 4 4 0 0 0
gtest_ept_vpid 14 14 0 0 0
=====================================================================
control_panel_ept_vpid 5 5 0 0 0
:KVM_256M_guest_PAE_gPAE 1 1 0 0 0
:KVM_four_sguest_PAE_gPA 1 1 0 0 0
:KVM_1500M_guest_PAE_gPA 1 1 0 0 0
:KVM_linux_win_PAE_gPAE 1 1 0 0 0
:KVM_two_winxp_PAE_gPAE 1 1 0 0 0
control_panel_ept 3 3 0 0 0
:KVM_four_sguest_PAE_gPA 1 1 0 0 0
:KVM_1500M_guest_PAE_gPA 1 1 0 0 0
:KVM_linux_win_PAE_gPAE 1 1 0 0 0
control_panel 3 3 0 0 0
:KVM_four_sguest_PAE_gPA 1 1 0 0 0
:KVM_1500M_guest_PAE_gPA 1 1 0 0 0
:KVM_linux_win_PAE_gPAE 1 1 0 0 0
control_panel_vpid 3 3 0 0 0
:KVM_four_sguest_PAE_gPA 1 1 0 0 0
:KVM_1500M_guest_PAE_gPA 1 1 0 0 0
:KVM_linux_win_PAE_gPAE 1 1 0 0 0
gtest_vpid 4 4 0 0 0
:boot_smp_vista_PAE_gPAE 1 1 0 0 0
:reboot_xp_PAE_gPAE 1 1 0 0 0
:boot_base_kernel_PAE_gP 1 1 0 0 0
:boot_up_win2008_PAE_gPA 1 1 0 0 0
gtest_ept 2 2 0 0 0
:boot_smp_acpi_win2k3_PA 1 1 0 0 0
:boot_smp_acpi_win2k_PAE 1 1 0 0 0
gtest 4 4 0 0 0
:boot_smp_win2008_PAE_gP 1 1 0 0 0
:boot_up_vista_PAE_gPAE 1 1 0 0 0
:boot_smp_acpi_xp_PAE_gP 1 1 0 0 0
:boot_up_noacpi_win2k_PA 1 1 0 0 0
gtest_ept_vpid 14 14 0 0 0
:boot_up_acpi_PAE_gPAE 1 1 0 0 0
:ltp_nightly_PAE_gPAE 1 1 0 0 0
:boot_up_acpi_xp_PAE_gPA 1 1 0 0 0
:boot_up_vista_PAE_gPAE 1 1 0 0 0
:boot_base_kernel_PAE_gP 1 1 0 0 0
:boot_up_acpi_win2k3_PAE 1 1 0 0 0
:boot_smp_acpi_win2k3_PA 1 1 0 0 0
:boot_up_win2008_PAE_gPA 1 1 0 0 0
:boot_up_acpi_win2k_PAE_ 1 1 0 0 0
:boot_smp_acpi_win2k_PAE 1 1 0 0 0
:boot_smp_acpi_xp_PAE_gP 1 1 0 0 0
:boot_up_noacpi_win2k_PA 1 1 0 0 0
:boot_smp_win2008_PAE_gP 1 1 0 0 0
:kb_nightly_PAE_gPAE 1 1 0 0 0
=====================================================================
Total 38 38 0 0 0
Report Summary on IA32e
Summary Test Report of Last Session
=====================================================================
Total Pass Fail NoResult Crash
=====================================================================
control_panel_ept_vpid 16 13 3 0 0
control_panel_ept 5 5 0 0 0
control_panel 5 5 0 0 0
control_panel_vpid 7 6 1 0 0
gtest_vpid 4 4 0 0 0
gtest_ept 1 1 0 0 0
gtest 6 6 0 0 0
vtd 7 5 2 0 0
vtd_ept_vpid 14 6 8 0 0
gtest_ept_vpid 19 19 0 0 0
=====================================================================
control_panel_ept_vpid 16 13 3 0 0
:KVM_SR_SMP_64_gPAE 1 0 1 0 0
:KVM_LM_Continuity_64_g3 1 1 0 0 0
:KVM_four_sguest_64_gPAE 1 1 0 0 0
:KVM_linux_win_64_gPAE 1 1 0 0 0
:KVM_LM_SMP_64_g32e 1 1 0 0 0
:KVM_1500M_guest_64_gPAE 1 1 0 0 0
:KVM_LM_Continuity_64_gP 1 1 0 0 0
:KVM_SR_Continuity_64_gP 1 0 1 0 0
:KVM_LM_SMP_64_gPAE 1 1 0 0 0
:KVM_1500M_guest_64_g32e 1 1 0 0 0
:KVM_256M_guest_64_gPAE 1 1 0 0 0
:KVM_two_winxp_64_gPAE 1 1 0 0 0
:KVM_SR_Continuity_64_g3 1 0 1 0 0
:KVM_256M_guest_64_g32e 1 1 0 0 0
:KVM_four_sguest_64_g32e 1 1 0 0 0
:KVM_four_dguest_64_gPAE 1 1 0 0 0
control_panel_ept 5 5 0 0 0
:KVM_linux_win_64_g32e 1 1 0 0 0
:KVM_1500M_guest_64_g32e 1 1 0 0 0
:KVM_four_sguest_64_gPAE 1 1 0 0 0
:KVM_LM_SMP_64_g32e 1 1 0 0 0
:KVM_1500M_guest_64_gPAE 1 1 0 0 0
control_panel 5 5 0 0 0
:KVM_1500M_guest_64_g32e 1 1 0 0 0
:KVM_linux_win_64_gPAE 1 1 0 0 0
:KVM_four_sguest_64_g32e 1 1 0 0 0
:KVM_LM_SMP_64_g32e 1 1 0 0 0
:KVM_1500M_guest_64_gPAE 1 1 0 0 0
control_panel_vpid 7 6 1 0 0
:KVM_linux_win_64_g32e 1 1 0 0 0
:KVM_SR_SMP_64_gPAE 1 0 1 0 0
:KVM_1500M_guest_64_g32e 1 1 0 0 0
:KVM_four_sguest_64_gPAE 1 1 0 0 0
:KVM_two_winxp_64_gPAE 1 1 0 0 0
:KVM_LM_SMP_64_g32e 1 1 0 0 0
:KVM_1500M_guest_64_gPAE 1 1 0 0 0
gtest_vpid 4 4 0 0 0
:boot_smp_win2008_64_g32 1 1 0 0 0
:boot_up_vista_64_gPAE 1 1 0 0 0
:boot_smp_acpi_win2k3_64 1 1 0 0 0
:boot_smp_acpi_xp_64_g32 1 1 0 0 0
gtest_ept 1 1 0 0 0
:boot_smp_acpi_xp_64_g32 1 1 0 0 0
gtest 6 6 0 0 0
:boot_smp_win2008_64_g32 1 1 0 0 0
:boot_base_kernel_64_g32 1 1 0 0 0
:boot_smp_acpi_xp_64_gPA 1 1 0 0 0
:boot_smp_acpi_win2k_64_ 1 1 0 0 0
:boot_up_win2008_64_gPAE 1 1 0 0 0
:boot_smp_vista_64_g32e 1 1 0 0 0
vtd 7 5 2 0 0
:one_pcie_scp_64_g32e 1 1 0 0 0
:one_pcie_up_nomsi_64_g3 1 1 0 0 0
:one_pcie_up_xp_64_g32e 1 0 1 0 0
:one_pcie_up_64_g32e 1 1 0 0 0
:lm_pcie_up_64_g32e 1 0 1 0 0
:two_dev_up_64_g32e 1 1 0 0 0
:hp_pcie_up_64_g32e 1 1 0 0 0
vtd_ept_vpid 14 6 8 0 0
:one_pcie_up_nomsi_64_g3 1 1 0 0 0
:one_pcie_up_xp_64_g32e 1 0 1 0 0
:one_pcie_scp_64_gPAE 1 1 0 0 0
:one_pcie_up_64_g32e 1 1 0 0 0
:lm_pcie_up_64_g32e 1 0 1 0 0
:hp_pcie_up_xp_64_g32e 1 0 1 0 0
:two_dev_up_64_g32e 1 0 1 0 0
:one_pcie_scp_64_g32e 1 1 0 0 0
:one_pcie_up_xp_64_gPAE 1 0 1 0 0
:one_pcie_smp_xp_64_g32e 1 0 1 0 0
:hp_pcie_smp_64_g32e 1 0 1 0 0
:one_pcie_smp_64_g32e 1 1 0 0 0
:hp_pcie_up_64_g32e 1 0 1 0 0
:one_pcie_up_64_gPAE 1 1 0 0 0
gtest_ept_vpid 19 19 0 0 0
:boot_up_acpi_64_gPAE 1 1 0 0 0
:boot_up_noacpi_xp_64_gP 1 1 0 0 0
:boot_base_kernel_64_gPA 1 1 0 0 0
:boot_smp_acpi_win2k3_64 1 1 0 0 0
:boot_smp_acpi_win2k_64_ 1 1 0 0 0
:kb_nightly_64_gPAE 1 1 0 0 0
:boot_up_acpi_xp_64_g32e 1 1 0 0 0
:boot_up_noacpi_win2k_64 1 1 0 0 0
:boot_smp_acpi_xp_64_gPA 1 1 0 0 0
:boot_smp_acpi_xp_64_g32 1 1 0 0 0
:boot_up_acpi_64_g32e 1 1 0 0 0
:boot_base_kernel_64_g32 1 1 0 0 0
:kb_nightly_64_g32e 1 1 0 0 0
:boot_up_acpi_win2k3_64_ 1 1 0 0 0
:boot_up_win2008_64_gPAE 1 1 0 0 0
:ltp_nightly_64_g32e 1 1 0 0 0
:boot_smp_win2008_64_g32 1 1 0 0 0
:boot_smp_acpi_win2k3_64 1 1 0 0 0
:boot_up_noacpi_win2k3_6 1 1 0 0 0
=====================================================================
Total 84 70 14 0 0
Best Regards,
Jiajun
* Re: Biweekly KVM Test report, kernel 7597f... qemu 1c45e...
2009-08-17 1:32 Biweekly KVM Test report, kernel 7597f... qemu 1c45e Xu, Jiajun
@ 2009-08-17 12:17 ` Avi Kivity
2009-08-19 2:14 ` Xu, Jiajun
0 siblings, 1 reply; 9+ messages in thread
From: Avi Kivity @ 2009-08-17 12:17 UTC (permalink / raw)
To: Xu, Jiajun; +Cc: 'kvm-devel'
On 08/17/2009 04:32 AM, Xu, Jiajun wrote:
> 5. failure to migrate guests with more than 4GB of RAM
> https://sourceforge.net/tracker/index.php?func=detail&aid=1971512&group_id=180599&atid=893831
>
Now that I have a large host, I tested this, and it works well. When
was this most recently tested?
--
error compiling committee.c: too many arguments to function
* RE: Biweekly KVM Test report, kernel 7597f... qemu 1c45e...
2009-08-17 12:17 ` Avi Kivity
@ 2009-08-19 2:14 ` Xu, Jiajun
2009-08-19 8:08 ` Avi Kivity
0 siblings, 1 reply; 9+ messages in thread
From: Xu, Jiajun @ 2009-08-19 2:14 UTC (permalink / raw)
To: 'Avi Kivity'; +Cc: 'kvm-devel'
On Monday, August 17, 2009 8:18 PM Avi Kivity wrote:
> On 08/17/2009 04:32 AM, Xu, Jiajun wrote:
>> 5. failure to migrate guests with more than 4GB of RAM
>>
> https://sourceforge.net/tracker/index.php?func=detail&aid=19715
> 12&group_id=180599&atid=893831
>>
>
> Now that I have a large host, I tested this, and it works well. When
> was this most recently tested?
I tried this with the latest commit; a Linux guest can sometimes migrate with more than 4G of memory.
But sometimes the guest hangs after migration, and the host console prints "Unknown savevm section type 40, load of migration failed".
Have you met this issue? I hit the error with both Linux and Windows guests occasionally.
Best Regards
Jiajun
* Re: Biweekly KVM Test report, kernel 7597f... qemu 1c45e...
2009-08-19 2:14 ` Xu, Jiajun
@ 2009-08-19 8:08 ` Avi Kivity
2009-08-21 7:14 ` Xu, Jiajun
0 siblings, 1 reply; 9+ messages in thread
From: Avi Kivity @ 2009-08-19 8:08 UTC (permalink / raw)
To: Xu, Jiajun; +Cc: 'kvm-devel'
On 08/19/2009 05:14 AM, Xu, Jiajun wrote:
> I tried this with the latest commit; a Linux guest can sometimes migrate with more than 4G of memory.
> But sometimes the guest hangs after migration, and the host console prints "Unknown savevm section type 40, load of migration failed".
>
> Have you met this issue? I hit the error with both Linux and Windows guests occasionally.
>
I haven't seen it. How many migrations does it take to reproduce?
--
error compiling committee.c: too many arguments to function
* RE: Biweekly KVM Test report, kernel 7597f... qemu 1c45e...
2009-08-19 8:08 ` Avi Kivity
@ 2009-08-21 7:14 ` Xu, Jiajun
2009-08-25 10:20 ` Avi Kivity
0 siblings, 1 reply; 9+ messages in thread
From: Xu, Jiajun @ 2009-08-21 7:14 UTC (permalink / raw)
To: 'Avi Kivity'; +Cc: 'kvm-devel'
On Wednesday, August 19, 2009 4:09 PM Avi Kivity wrote:
> On 08/19/2009 05:14 AM, Xu, Jiajun wrote:
>> I tried this with the latest commit; a Linux guest can sometimes
>> migrate with more than 4G of memory.
>> But sometimes the guest hangs after migration, and the host console
>> prints "Unknown savevm section type 40, load of migration failed".
>>
>> Have you met this issue? I hit the error with both Linux and
>> Windows guests occasionally.
>>
>
> I haven't seen it. How many migrations does it take to reproduce?
I found the migration failure was caused by a configuration mistake on our testing machine. 64-bit migration now works well.
But on a PAE host, migration triggers a host kernel call trace:
Pid: 12053, comm: qemu-system-x86 Tainted: G D (2.6.31-rc2 #1)
EIP: 0060:[<c043e023>] EFLAGS: 00210202 CPU: 0
EIP is at lock_hrtimer_base+0x11/0x33
EAX: f5d1541c EBX: 00000010 ECX: 000004a9 EDX: f5c1bc7c
ESI: f5d1541c EDI: f5c1bc7c EBP: f5c1bc74 ESP: f5c1bc68
DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068
Process qemu-system-x86 (pid: 12053, ti=f5c1b000 task=f61cb410
task.ti=f5c1b000)
Stack:
f5d1541c ffffffff 000004a9 f5c1bc8c c043e097 f9b7f7cb f5d1541c 00000000
<0> 000004a9 f5c1bc98 c043e0f0 f5d153d0 f5c1bcb0 f9b9b4df 00000000 bfd8a102
<0> f3c1e000 f5d15440 f5c1bcc0 f9b9b56d bfd8a10c f3c1e000 f5c1bda0 f9b8c26b
Call Trace:
[<c043e097>] ? hrtimer_try_to_cancel+0x16/0x62
[<f9b7f7cb>] ? kvm_flush_remote_tlbs+0xd/0x1a [kvm]
[<c043e0f0>] ? hrtimer_cancel+0xd/0x18
[<f9b9b4df>] ? pit_load_count+0x98/0x9e [kvm]
[<f9b9b56d>] ? kvm_pit_load_count+0x21/0x35 [kvm]
[<f9b8c26b>] ? kvm_arch_vm_ioctl+0x91e/0x9f5 [kvm]
[<f9b7f3b4>] ? kvm_set_memory_region+0x2f/0x37 [kvm]
[<f9b809c7>] ? kvm_vm_ioctl+0xafb/0xb45 [kvm]
[<c043ddf8>] ? enqueue_hrtimer+0x5d/0x68
[<c043e258>] ? __hrtimer_start_range_ns+0x15d/0x168
[<c043e272>] ? hrtimer_start+0xf/0x11
[<f9cd51cd>] ? vmx_vcpu_put+0x8/0xa [kvm_intel]
[<f9b83e8b>] ? kvm_arch_vcpu_put+0x16/0x19 [kvm]
[<f9b8b943>] ? kvm_arch_vcpu_ioctl+0x7d5/0x7df [kvm]
[<c041f1e5>] ? kmap_atomic+0x14/0x16
[<c046ec2f>] ? get_page_from_freelist+0x27c/0x2d2
[<c046ed72>] ? __alloc_pages_nodemask+0xd7/0x402
[<c04714a6>] ? lru_cache_add_lru+0x22/0x24
[<f9b7f6b5>] ? kvm_dev_ioctl+0x22d/0x250 [kvm]
[<f9b7fecc>] ? kvm_vm_ioctl+0x0/0xb45 [kvm]
[<c049a9ab>] ? vfs_ioctl+0x22/0x67
[<c049af1d>] ? do_vfs_ioctl+0x46c/0x4b7
[<c05fb0fb>] ? sys_recv+0x18/0x1a
[<c0446bef>] ? sys_futex+0xed/0x103
[<c049afa8>] ? sys_ioctl+0x40/0x5a
[<c04028a4>] ? sysenter_do_call+0x12/0x22
Code: c0 ff 45 e4 83 45 dc 24 83 7d e4 02 0f 85 cf fe ff ff 8d 65 f4 5b 5e 5f
5d c3 55 89 e5 57 89 d7 56 89 c6 53 8b 5e 20 85 db 74 17 <8b> 03 e8 0e dd 23 00
89 07 3b 5e 20 74 0d 89 c2 8b 03 e8 8a dd
EIP: [<c043e023>] lock_hrtimer_base+0x11/0x33 SS:ESP 0068:f5c1bc68
CR2: 0000000000000010
---[ end trace f747f57e7d1b76c8 ]---
Best Regards
Jiajun
* Re: Biweekly KVM Test report, kernel 7597f... qemu 1c45e...
2009-08-21 7:14 ` Xu, Jiajun
@ 2009-08-25 10:20 ` Avi Kivity
2009-08-25 12:29 ` KVM: PIT: fix pit_state copy in set_pit2/get_pit2 Marcelo Tosatti
0 siblings, 1 reply; 9+ messages in thread
From: Avi Kivity @ 2009-08-25 10:20 UTC (permalink / raw)
To: Xu, Jiajun, Marcelo Tosatti; +Cc: 'kvm-devel'
On 08/21/2009 10:14 AM, Xu, Jiajun wrote:
> I found the migration failure was caused by a configuration mistake on our testing machine. 64-bit migration now works well.
> But on a PAE host, migration triggers a host kernel call trace:
>
>
> Pid: 12053, comm: qemu-system-x86 Tainted: G D (2.6.31-rc2 #1)
> EIP: 0060:[<c043e023>] EFLAGS: 00210202 CPU: 0
> EIP is at lock_hrtimer_base+0x11/0x33
> EAX: f5d1541c EBX: 00000010 ECX: 000004a9 EDX: f5c1bc7c
> ESI: f5d1541c EDI: f5c1bc7c EBP: f5c1bc74 ESP: f5c1bc68
> DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068
> Process qemu-system-x86 (pid: 12053, ti=f5c1b000 task=f61cb410
> task.ti=f5c1b000)
> Stack:
> f5d1541c ffffffff 000004a9 f5c1bc8c c043e097 f9b7f7cb f5d1541c 00000000
> <0> 000004a9 f5c1bc98 c043e0f0 f5d153d0 f5c1bcb0 f9b9b4df 00000000 bfd8a102
> <0> f3c1e000 f5d15440 f5c1bcc0 f9b9b56d bfd8a10c f3c1e000 f5c1bda0 f9b8c26b
> Call Trace:
> [<c043e097>] ? hrtimer_try_to_cancel+0x16/0x62
> [<f9b7f7cb>] ? kvm_flush_remote_tlbs+0xd/0x1a [kvm]
> [<c043e0f0>] ? hrtimer_cancel+0xd/0x18
> [<f9b9b4df>] ? pit_load_count+0x98/0x9e [kvm]
> [<f9b9b56d>] ? kvm_pit_load_count+0x21/0x35 [kvm]
>
Marcelo, any idea? Looks like the PIT was reloaded, but the hrtimer
wasn't initialized?
--
error compiling committee.c: too many arguments to function
* KVM: PIT: fix pit_state copy in set_pit2/get_pit2
2009-08-25 10:20 ` Avi Kivity
@ 2009-08-25 12:29 ` Marcelo Tosatti
2009-08-25 12:33 ` Avi Kivity
0 siblings, 1 reply; 9+ messages in thread
From: Marcelo Tosatti @ 2009-08-25 12:29 UTC (permalink / raw)
To: Avi Kivity; +Cc: Xu, Jiajun, 'kvm-devel'
The kvm_pit_state2 structure contains extra space, so the memcpy
in kvm_vm_ioctl_set_pit2 corrupts kvm->arch.vpit->pit_state.
Fix it by memcpy'ing the channel information and assigning flags
manually.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 0f22f72..35e7fc5 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2095,7 +2095,9 @@ static int kvm_vm_ioctl_get_pit2(struct kvm *kvm, struct kvm_pit_state2 *ps)
int r = 0;
mutex_lock(&kvm->arch.vpit->pit_state.lock);
- memcpy(ps, &kvm->arch.vpit->pit_state, sizeof(struct kvm_pit_state2));
+ memcpy(ps->channels, &kvm->arch.vpit->pit_state.channels,
+ sizeof(ps->channels));
+ ps->flags = kvm->arch.vpit->pit_state.flags;
mutex_unlock(&kvm->arch.vpit->pit_state.lock);
return r;
}
@@ -2109,7 +2111,9 @@ static int kvm_vm_ioctl_set_pit2(struct kvm *kvm, struct kvm_pit_state2 *ps)
cur_legacy = ps->flags & KVM_PIT_FLAGS_HPET_LEGACY;
if (!prev_legacy && cur_legacy)
start = 1;
- memcpy(&kvm->arch.vpit->pit_state, ps, sizeof(struct kvm_pit_state2));
+ memcpy(&kvm->arch.vpit->pit_state.channels, &ps->channels,
+ sizeof(kvm->arch.vpit->pit_state.channels));
+ kvm->arch.vpit->pit_state.flags = ps->flags;
kvm_pit_load_count(kvm, 0, kvm->arch.vpit->pit_state.channels[0].count, start);
mutex_unlock(&kvm->arch.vpit->pit_state.lock);
return r;
* Re: KVM: PIT: fix pit_state copy in set_pit2/get_pit2
2009-08-25 12:29 ` KVM: PIT: fix pit_state copy in set_pit2/get_pit2 Marcelo Tosatti
@ 2009-08-25 12:33 ` Avi Kivity
2009-08-27 1:23 ` Xu, Jiajun
0 siblings, 1 reply; 9+ messages in thread
From: Avi Kivity @ 2009-08-25 12:33 UTC (permalink / raw)
To: Marcelo Tosatti; +Cc: Xu, Jiajun, 'kvm-devel'
On 08/25/2009 03:29 PM, Marcelo Tosatti wrote:
> The kvm_pit_state2 structure contains extra space, so the memcpy
> in kvm_vm_ioctl_set_pit2 corrupts kvm->arch.vpit->pit_state.
>
> Fix it by memcpy'ing the channel information and assigning flags
> manually.
>
Good catch; applied.
--
error compiling committee.c: too many arguments to function
* RE: KVM: PIT: fix pit_state copy in set_pit2/get_pit2
2009-08-25 12:33 ` Avi Kivity
@ 2009-08-27 1:23 ` Xu, Jiajun
0 siblings, 0 replies; 9+ messages in thread
From: Xu, Jiajun @ 2009-08-27 1:23 UTC (permalink / raw)
To: 'Avi Kivity', 'Marcelo Tosatti'; +Cc: 'kvm-devel'
On Tuesday, August 25, 2009 8:33 PM Avi Kivity wrote:
> On 08/25/2009 03:29 PM, Marcelo Tosatti wrote:
>> The kvm_pit_state2 structure contains extra space, so the memcpy
>> in kvm_vm_ioctl_set_pit2 corrupts kvm->arch.vpit->pit_state.
>>
>> Fix it by memcpy'ing the channel information and assigning flags
>> manually.
>>
>
> Good catch; applied.
I verified with kvm commit 323d3b06db8bf2d8e4c5ed1a390668ae7b1b84bf; the issue is gone with this fix.
Best Regards
Jiajun