* [PATCH 0/4 v2] kvm: rework KVM mmu_shrink() code
@ 2010-08-20 1:10 Tim Pepper
2010-08-22 15:29 ` Avi Kivity
2010-08-23 10:22 ` Avi Kivity
0 siblings, 2 replies; 11+ messages in thread
From: Tim Pepper @ 2010-08-20 1:10 UTC (permalink / raw)
To: Avi Kivity, Marcelo Tosatti, Lai Jiangshan, Dave Hansen, LKML, kvm
The following series is the four patches Dave Hansen had queued for test
as mentioned last week in the thread:
"[PATCH] kvm: make mmu_shrink() fit shrinker's requirement"
Last week, just before leaving for vacation, Dave noted in that thread
that these four were ready to merge now that our perf team's testing
had finally wrapped up. But it turns out he hadn't actually posted
them after refactoring in response to the comments back in June...
I'm covering for him in his absence and had previously reviewed this set.
This version contains fixes in response to the June comments. The
patches are pulled straight from Dave's development tree, as tested, with
a minor build/merge change to patch #3, which was otherwise inadvertently
re-introducing an (unused) variable that Avi had more recently removed.
Compared to the previous version from June:
- patch #3 addresses Marcelo's comment about a double de-accounting
of kvm->arch.n_used_mmu_pages
- patch #4 includes protection of the used mmu page counts in response to
Avi's comments
Avi: if Dave's use of a per-cpu counter in the refactored patch #4 is
acceptable to you, then the series is ready for merging.
--
Tim Pepper <lnxninja@linux.vnet.ibm.com>
IBM Linux Technology Center
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH 0/4 v2] kvm: rework KVM mmu_shrink() code
2010-08-20 1:10 [PATCH 0/4 v2] kvm: rework KVM mmu_shrink() code Tim Pepper
@ 2010-08-22 15:29 ` Avi Kivity
2010-08-23 10:22 ` Avi Kivity
1 sibling, 0 replies; 11+ messages in thread
From: Avi Kivity @ 2010-08-22 15:29 UTC (permalink / raw)
To: Tim Pepper; +Cc: Marcelo Tosatti, Lai Jiangshan, Dave Hansen, LKML, kvm
On 08/20/2010 04:10 AM, Tim Pepper wrote:
> The following series is the four patches Dave Hansen had queued for test
> as mentioned last week in the thread:
> "[PATCH] kvm: make mmu_shrink() fit shrinker's requirement"
> Last week, just before leaving for vacation, Dave noted in that thread
> that these four were ready to merge now that our perf team's testing
> had finally wrapped up. But it turns out he hadn't actually posted
> them after refactoring in response to the comments back in June...
>
> I'm covering for him in his absence and had previously reviewed this set.
> This version contains fixes in response to the June comments. The
> patches are pulled straight from Dave's development tree, as tested, with
> a minor build/merge change to patch #3, which was otherwise inadvertently
> re-introducing an (unused) variable that Avi had more recently removed.
>
> Compared to the previous version from June:
> - patch #3 addresses Marcelo's comment about a double de-accounting
> of kvm->arch.n_used_mmu_pages
> - patch #4 includes protection of the used mmu page counts in response to
> Avi's comments
>
> Avi: if Dave's use of a per-cpu counter in the refactored patch #4 is
> acceptable to you, then the series is ready for merging.
>
Applied, thanks for taking care of this.
--
error compiling committee.c: too many arguments to function
* Re: [PATCH 0/4 v2] kvm: rework KVM mmu_shrink() code
2010-08-20 1:10 [PATCH 0/4 v2] kvm: rework KVM mmu_shrink() code Tim Pepper
2010-08-22 15:29 ` Avi Kivity
@ 2010-08-23 10:22 ` Avi Kivity
2010-08-23 10:27 ` Avi Kivity
1 sibling, 1 reply; 11+ messages in thread
From: Avi Kivity @ 2010-08-23 10:22 UTC (permalink / raw)
To: Tim Pepper; +Cc: Marcelo Tosatti, Lai Jiangshan, Dave Hansen, LKML, kvm
On 08/20/2010 04:10 AM, Tim Pepper wrote:
> The following series is the four patches Dave Hansen had queued for test
> as mentioned last week in the thread:
> "[PATCH] kvm: make mmu_shrink() fit shrinker's requirement"
> Last week, just before leaving for vacation, Dave noted in that thread
> that these four were ready to merge now that our perf team's testing
> had finally wrapped up. But it turns out he hadn't actually posted
> them after refactoring in response to the comments back in June...
>
> I'm covering for him in his absence and had previously reviewed this set.
> This version contains fixes in response to the June comments. The
> patches are pulled straight from Dave's development tree, as tested, with
> a minor build/merge change to patch #3, which was otherwise inadvertently
> re-introducing an (unused) variable that Avi had more recently removed.
>
> Compared to the previous version from June:
> - patch #3 addresses Marcelo's comment about a double de-accounting
> of kvm->arch.n_used_mmu_pages
> - patch #4 includes protection of the used mmu page counts in response to
> Avi's comments
>
> Avi: if Dave's use of a per-cpu counter in the refactored patch #4 is
> acceptable to you, then the series is ready for merging.
>
I see a lot of soft lockups with this patchset:
BUG: soft lockup - CPU#0 stuck for 61s! [qemu:1917]
Modules linked in: netconsole configfs p4_clockmod freq_table
speedstep_lib kvm_intel kvm e1000e i2c_i801 i2c_core microcode serio_raw
[last unloaded: mperf]
CPU 0
Modules linked in: netconsole configfs p4_clockmod freq_table
speedstep_lib kvm_intel kvm e1000e i2c_i801 i2c_core microcode serio_raw
[last unloaded: mperf]
Pid: 1917, comm: qemu Not tainted 2.6.35 #253
TYAN-Tempest-i5000VS-S5372/TYAN Transport GT20-B5372
RIP: 0010:[<ffffffffa007c1cd>] [<ffffffffa007c1cd>]
kvm_mmu_prepare_zap_page+0xb7/0x262 [kvm]
RSP: 0018:ffff880129c8dab8 EFLAGS: 00000202
RAX: fffffffffffff001 RBX: ffff880129c8daf8 RCX: ffff880128cd0278
RDX: fffffffffffff001 RSI: ffff88012a2a7140 RDI: ffff880128cd0000
RBP: ffffffff8100a40e R08: 0000000000000004 R09: 0000000000000004
R10: dead000000100100 R11: ffffffffa0067422 R12: 0000000000000010
R13: ffffffff813a0246 R14: ffffffffffffff10 R15: ffff880128cd0004
FS: 00007f9901018710(0000) GS:ffff880001a00000(0000) knlGS:0000000000000000
CS: 0010 DS: 002b ES: 002b CR0: 000000008005003b
CR2: 0000123400000000 CR3: 0000000128cf1000 CR4: 00000000000026f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process qemu (pid: 1917, threadinfo ffff880129c8c000, task ffff88012a0dc530)
Stack:
ffff880129c8db08 000000ad2a2a7140 ffff880128cd0000 ffff8801287bc000
<0> ffff880129c8db08 0000000000000002 0000000000000000 0000000000000001
<0> ffff880129c8db28 ffffffffa007ca37 ffff88010893b3c0 ffff88012a2a7be0
Call Trace:
[<ffffffffa007ca37>] ? __kvm_mmu_free_some_pages+0x2b/0x6a [kvm]
[<ffffffffa007ca93>] ? kvm_mmu_free_some_pages+0x1d/0x1f [kvm]
[<ffffffffa0080e81>] ? paging64_page_fault+0x18c/0x1c3 [kvm]
[<ffffffffa007d427>] ? reset_rsvds_bits_mask+0x12/0x150 [kvm]
[<ffffffffa007d70e>] ? init_kvm_mmu+0x1a9/0x33b [kvm]
[<ffffffffa007d8cf>] ? kvm_mmu_reset_context+0x24/0x28 [kvm]
[<ffffffffa0074e00>] ? emulate_instruction+0x291/0x2db [kvm]
[<ffffffffa008108c>] ? kvm_mmu_page_fault+0x1a/0x70 [kvm]
[<ffffffffa00bfff8>] ? handle_exception+0x191/0x2ec [kvm_intel]
[<ffffffffa00c0328>] ? vmx_handle_exit+0x1d5/0x207 [kvm_intel]
[<ffffffffa007674c>] ? kvm_arch_vcpu_ioctl_run+0x861/0xb09 [kvm]
[<ffffffffa0075834>] ? kvm_arch_vcpu_load+0x86/0xd2 [kvm]
[<ffffffffa0069c8a>] ? kvm_vcpu_ioctl+0x10d/0x4eb [kvm]
[<ffffffff810f9b3a>] ? do_sync_write+0xc6/0x103
[<ffffffff810c1f7d>] ? lru_cache_add_lru+0x22/0x24
[<ffffffff811061fb>] ? vfs_ioctl+0x2d/0xa1
[<ffffffff8110674e>] ? do_vfs_ioctl+0x468/0x4a1
[<ffffffff810f9925>] ? fsnotify_modify+0x67/0x6f
[<ffffffff811067c9>] ? sys_ioctl+0x42/0x65
[<ffffffff81009a42>] ? system_call_fastpath+0x16/0x1b
Something's wrong, probably some variable went negative.
--
error compiling committee.c: too many arguments to function
* Re: [PATCH 0/4 v2] kvm: rework KVM mmu_shrink() code
2010-08-23 10:22 ` Avi Kivity
@ 2010-08-23 10:27 ` Avi Kivity
2010-08-23 11:11 ` Xiaotian Feng
0 siblings, 1 reply; 11+ messages in thread
From: Avi Kivity @ 2010-08-23 10:27 UTC (permalink / raw)
To: Tim Pepper; +Cc: Marcelo Tosatti, Lai Jiangshan, Dave Hansen, LKML, kvm
On 08/23/2010 01:22 PM, Avi Kivity wrote:
>
>
> I see a lot of soft lockups with this patchset:
This is running the emulator.flat test case, with shadow paging. This
test triggers a lot (millions) of mmu mode switches.
--
error compiling committee.c: too many arguments to function
* Re: [PATCH 0/4 v2] kvm: rework KVM mmu_shrink() code
2010-08-23 10:27 ` Avi Kivity
@ 2010-08-23 11:11 ` Xiaotian Feng
2010-08-23 11:28 ` Avi Kivity
2010-08-24 2:07 ` Marcelo Tosatti
0 siblings, 2 replies; 11+ messages in thread
From: Xiaotian Feng @ 2010-08-23 11:11 UTC (permalink / raw)
To: Avi Kivity
Cc: Tim Pepper, Marcelo Tosatti, Lai Jiangshan, Dave Hansen, LKML,
kvm
On Mon, Aug 23, 2010 at 6:27 PM, Avi Kivity <avi@redhat.com> wrote:
> On 08/23/2010 01:22 PM, Avi Kivity wrote:
>>
>>
>> I see a lot of soft lockups with this patchset:
>
> This is running the emulator.flat test case, with shadow paging. This test
> triggers a lot (millions) of mmu mode switches.
>
Does the following patch fix your issue?
The latest kvm mmu_shrink() rework changes kvm->arch.n_used_mmu_pages/
kvm->arch.n_max_mmu_pages in kvm_mmu_free_page()/kvm_mmu_alloc_page(),
which are called by kvm_mmu_commit_zap_page(). So kvm->arch.n_used_mmu_pages
(and hence kvm_mmu_available_pages(vcpu->kvm)) is unchanged after
kvm_mmu_prepare_zap_page(), which made kvm_mmu_change_mmu_pages()/
__kvm_mmu_free_some_pages() loop forever. Moving kvm_mmu_commit_zap_page()
inside the loop makes the while loop terminate as normal.
---
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index f52a965..7e09a21 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1726,8 +1726,8 @@ void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned int goal_nr_mmu_pages)
 			page = container_of(kvm->arch.active_mmu_pages.prev,
 					    struct kvm_mmu_page, link);
 			kvm_mmu_prepare_zap_page(kvm, page,
 						 &invalid_list);
+			kvm_mmu_commit_zap_page(kvm, &invalid_list);
 		}
-		kvm_mmu_commit_zap_page(kvm, &invalid_list);
 		goal_nr_mmu_pages = kvm->arch.n_used_mmu_pages;
 	}

@@ -2976,9 +2976,9 @@ void __kvm_mmu_free_some_pages(struct kvm_vcpu *vcpu)
 		sp = container_of(vcpu->kvm->arch.active_mmu_pages.prev,
 				  struct kvm_mmu_page, link);
 		kvm_mmu_prepare_zap_page(vcpu->kvm, sp, &invalid_list);
+		kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
 		++vcpu->kvm->stat.mmu_recycled;
 	}
-	kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
 }

 int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u32 error_code)
> --
> error compiling committee.c: too many arguments to function
>
* Re: [PATCH 0/4 v2] kvm: rework KVM mmu_shrink() code
2010-08-23 11:11 ` Xiaotian Feng
@ 2010-08-23 11:28 ` Avi Kivity
2010-08-23 19:53 ` Tim Pepper
2010-08-24 2:07 ` Marcelo Tosatti
1 sibling, 1 reply; 11+ messages in thread
From: Avi Kivity @ 2010-08-23 11:28 UTC (permalink / raw)
To: Xiaotian Feng
Cc: Tim Pepper, Marcelo Tosatti, Lai Jiangshan, Dave Hansen, LKML,
kvm
On 08/23/2010 02:11 PM, Xiaotian Feng wrote:
> On Mon, Aug 23, 2010 at 6:27 PM, Avi Kivity<avi@redhat.com> wrote:
>> On 08/23/2010 01:22 PM, Avi Kivity wrote:
>>>
>>> I see a lot of soft lockups with this patchset:
>> This is running the emulator.flat test case, with shadow paging. This test
>> triggers a lot (millions) of mmu mode switches.
>>
> Does following patch fix your issue?
>
It does indeed!
--
error compiling committee.c: too many arguments to function
* Re: [PATCH 0/4 v2] kvm: rework KVM mmu_shrink() code
2010-08-23 11:28 ` Avi Kivity
@ 2010-08-23 19:53 ` Tim Pepper
0 siblings, 0 replies; 11+ messages in thread
From: Tim Pepper @ 2010-08-23 19:53 UTC (permalink / raw)
To: Avi Kivity
Cc: Xiaotian Feng, Marcelo Tosatti, Lai Jiangshan, Dave Hansen, LKML,
kvm
On Mon, Aug 23, 2010 at 4:28 AM, Avi Kivity <avi@redhat.com> wrote:
> On 08/23/2010 02:11 PM, Xiaotian Feng wrote:
>>
>> On Mon, Aug 23, 2010 at 6:27 PM, Avi Kivity<avi@redhat.com> wrote:
>>>
>>> On 08/23/2010 01:22 PM, Avi Kivity wrote:
>>>>
>>>> I see a lot of soft lockups with this patchset:
>>>
>>> This is running the emulator.flat test case, with shadow paging. This
>>> test
>>> triggers a lot (millions) of mmu mode switches.
>>>
>> Does following patch fix your issue?
>>
>
> It does indeed!
Thanks Xiaotian Feng!
Avi: here's also a minor whitespace fixup on top of the previous.
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index f83b941..0c56484 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1696,8 +1696,7 @@ void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned int goal_nr_mmu_pages)
 			page = container_of(kvm->arch.active_mmu_pages.prev,
 					    struct kvm_mmu_page, link);
-			kvm_mmu_prepare_zap_page(kvm, page,
-						 &invalid_list);
+			kvm_mmu_prepare_zap_page(kvm, page, &invalid_list);
 			kvm_mmu_commit_zap_page(kvm, &invalid_list);
 		}
 		goal_nr_mmu_pages = kvm->arch.n_used_mmu_pages;
Tim
* Re: [PATCH 0/4 v2] kvm: rework KVM mmu_shrink() code
2010-08-23 11:11 ` Xiaotian Feng
2010-08-23 11:28 ` Avi Kivity
@ 2010-08-24 2:07 ` Marcelo Tosatti
2010-08-24 2:31 ` [PATCH -kvm] kvm: fix regression from " Xiaotian Feng
2010-08-24 9:50 ` [PATCH 0/4 v2] kvm: " Xiaotian Feng
1 sibling, 2 replies; 11+ messages in thread
From: Marcelo Tosatti @ 2010-08-24 2:07 UTC (permalink / raw)
To: Xiaotian Feng
Cc: Avi Kivity, Tim Pepper, Lai Jiangshan, Dave Hansen, LKML, kvm
On Mon, Aug 23, 2010 at 07:11:11PM +0800, Xiaotian Feng wrote:
> On Mon, Aug 23, 2010 at 6:27 PM, Avi Kivity <avi@redhat.com> wrote:
> > On 08/23/2010 01:22 PM, Avi Kivity wrote:
> >>
> >>
> >> I see a lot of soft lockups with this patchset:
> >
> > This is running the emulator.flat test case, with shadow paging. This test
> > triggers a lot (millions) of mmu mode switches.
> >
>
> Does the following patch fix your issue?
>
> The latest kvm mmu_shrink() rework changes kvm->arch.n_used_mmu_pages/
> kvm->arch.n_max_mmu_pages in kvm_mmu_free_page()/kvm_mmu_alloc_page(),
> which are called by kvm_mmu_commit_zap_page(). So kvm->arch.n_used_mmu_pages
> (and hence kvm_mmu_available_pages(vcpu->kvm)) is unchanged after
> kvm_mmu_prepare_zap_page(), which made kvm_mmu_change_mmu_pages()/
> __kvm_mmu_free_some_pages() loop forever. Moving kvm_mmu_commit_zap_page()
> inside the loop makes the while loop terminate as normal.
>
> ---
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index f52a965..7e09a21 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -1726,8 +1726,8 @@ void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned int goal_nr_mmu_pages)
>  			page = container_of(kvm->arch.active_mmu_pages.prev,
>  					    struct kvm_mmu_page, link);
>  			kvm_mmu_prepare_zap_page(kvm, page,
>  						 &invalid_list);
> +			kvm_mmu_commit_zap_page(kvm, &invalid_list);
>  		}
> -		kvm_mmu_commit_zap_page(kvm, &invalid_list);
>  		goal_nr_mmu_pages = kvm->arch.n_used_mmu_pages;
>  	}
>
> @@ -2976,9 +2976,9 @@ void __kvm_mmu_free_some_pages(struct kvm_vcpu *vcpu)
>  		sp = container_of(vcpu->kvm->arch.active_mmu_pages.prev,
>  				  struct kvm_mmu_page, link);
>  		kvm_mmu_prepare_zap_page(vcpu->kvm, sp, &invalid_list);
> +		kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
>  		++vcpu->kvm->stat.mmu_recycled;
>  	}
> -	kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
>  }
>
>  int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u32 error_code)
Please resend with a Signed-off-by and a proper subject for the patch.
Thanks
* [PATCH -kvm] kvm: fix regression from rework KVM mmu_shrink() code
2010-08-24 2:07 ` Marcelo Tosatti
@ 2010-08-24 2:31 ` Xiaotian Feng
2010-08-24 13:12 ` Marcelo Tosatti
2010-08-24 9:50 ` [PATCH 0/4 v2] kvm: " Xiaotian Feng
1 sibling, 1 reply; 11+ messages in thread
From: Xiaotian Feng @ 2010-08-24 2:31 UTC (permalink / raw)
To: kvm; +Cc: linux-kernel, Xiaotian Feng, Marcelo Tosatti, Dave Hansen,
Tim Pepper
The latest kvm mmu_shrink() rework changes kvm->arch.n_used_mmu_pages/
kvm->arch.n_max_mmu_pages in kvm_mmu_free_page()/kvm_mmu_alloc_page(),
which are called by kvm_mmu_commit_zap_page(). So kvm->arch.n_used_mmu_pages
(and hence kvm_mmu_available_pages(vcpu->kvm)) is unchanged after
kvm_mmu_prepare_zap_page(), which made kvm_mmu_change_mmu_pages()/
__kvm_mmu_free_some_pages() loop forever. Moving kvm_mmu_commit_zap_page()
inside the loop makes the while loop terminate as normal.
Reported-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Xiaotian Feng <dfeng@redhat.com>
Tested-by: Avi Kivity <avi@redhat.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Dave Hansen <dave@linux.vnet.ibm.com>
Cc: Tim Pepper <lnxninja@linux.vnet.ibm.com>
---
arch/x86/kvm/mmu.c | 7 +++----
1 files changed, 3 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index f52a965..0991de3 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1724,10 +1724,9 @@ void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned int goal_nr_mmu_pages)
 			page = container_of(kvm->arch.active_mmu_pages.prev,
 					    struct kvm_mmu_page, link);
-			kvm_mmu_prepare_zap_page(kvm, page,
-						 &invalid_list);
+			kvm_mmu_prepare_zap_page(kvm, page, &invalid_list);
+			kvm_mmu_commit_zap_page(kvm, &invalid_list);
 		}
-		kvm_mmu_commit_zap_page(kvm, &invalid_list);
 		goal_nr_mmu_pages = kvm->arch.n_used_mmu_pages;
 	}

@@ -2976,9 +2975,9 @@ void __kvm_mmu_free_some_pages(struct kvm_vcpu *vcpu)
 		sp = container_of(vcpu->kvm->arch.active_mmu_pages.prev,
 				  struct kvm_mmu_page, link);
 		kvm_mmu_prepare_zap_page(vcpu->kvm, sp, &invalid_list);
+		kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
 		++vcpu->kvm->stat.mmu_recycled;
 	}
-	kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
 }

 int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u32 error_code)
--
1.7.2.1
* Re: [PATCH 0/4 v2] kvm: rework KVM mmu_shrink() code
2010-08-24 2:07 ` Marcelo Tosatti
2010-08-24 2:31 ` [PATCH -kvm] kvm: fix regression from " Xiaotian Feng
@ 2010-08-24 9:50 ` Xiaotian Feng
1 sibling, 0 replies; 11+ messages in thread
From: Xiaotian Feng @ 2010-08-24 9:50 UTC (permalink / raw)
To: Marcelo Tosatti
Cc: Avi Kivity, Tim Pepper, Lai Jiangshan, Dave Hansen, LKML, kvm
On Tue, Aug 24, 2010 at 10:07 AM, Marcelo Tosatti <mtosatti@redhat.com> wrote:
> On Mon, Aug 23, 2010 at 07:11:11PM +0800, Xiaotian Feng wrote:
>> On Mon, Aug 23, 2010 at 6:27 PM, Avi Kivity <avi@redhat.com> wrote:
>> > On 08/23/2010 01:22 PM, Avi Kivity wrote:
>> >>
>> >>
>> >> I see a lot of soft lockups with this patchset:
>> >
>> > This is running the emulator.flat test case, with shadow paging. This test
>> > triggers a lot (millions) of mmu mode switches.
>> >
>>
>> Does the following patch fix your issue?
>>
>> The latest kvm mmu_shrink() rework changes kvm->arch.n_used_mmu_pages/
>> kvm->arch.n_max_mmu_pages in kvm_mmu_free_page()/kvm_mmu_alloc_page(),
>> which are called by kvm_mmu_commit_zap_page(). So kvm->arch.n_used_mmu_pages
>> (and hence kvm_mmu_available_pages(vcpu->kvm)) is unchanged after
>> kvm_mmu_prepare_zap_page(), which made kvm_mmu_change_mmu_pages()/
>> __kvm_mmu_free_some_pages() loop forever. Moving kvm_mmu_commit_zap_page()
>> inside the loop makes the while loop terminate as normal.
>>
>> ---
>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>> index f52a965..7e09a21 100644
>> --- a/arch/x86/kvm/mmu.c
>> +++ b/arch/x86/kvm/mmu.c
>> @@ -1726,8 +1726,8 @@ void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned int goal_nr_mmu_pages)
>>  			page = container_of(kvm->arch.active_mmu_pages.prev,
>>  					    struct kvm_mmu_page, link);
>>  			kvm_mmu_prepare_zap_page(kvm, page,
>>  						 &invalid_list);
>> +			kvm_mmu_commit_zap_page(kvm, &invalid_list);
>>  		}
>> -		kvm_mmu_commit_zap_page(kvm, &invalid_list);
>>  		goal_nr_mmu_pages = kvm->arch.n_used_mmu_pages;
>>  	}
>>
>> @@ -2976,9 +2976,9 @@ void __kvm_mmu_free_some_pages(struct kvm_vcpu *vcpu)
>>  		sp = container_of(vcpu->kvm->arch.active_mmu_pages.prev,
>>  				  struct kvm_mmu_page, link);
>>  		kvm_mmu_prepare_zap_page(vcpu->kvm, sp, &invalid_list);
>> +		kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
>>  		++vcpu->kvm->stat.mmu_recycled;
>>  	}
>> -	kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
>>  }
>>
>>  int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u32 error_code)
>
> Please resend with a signed-off-by, and proper subject for the patch.
It's available at: https://patchwork.kernel.org/patch/125431/
Thanks
Xiaotian
>
> Thanks
>
* Re: [PATCH -kvm] kvm: fix regression from rework KVM mmu_shrink() code
2010-08-24 2:31 ` [PATCH -kvm] kvm: fix regression from " Xiaotian Feng
@ 2010-08-24 13:12 ` Marcelo Tosatti
0 siblings, 0 replies; 11+ messages in thread
From: Marcelo Tosatti @ 2010-08-24 13:12 UTC (permalink / raw)
To: Xiaotian Feng; +Cc: kvm, linux-kernel, Dave Hansen, Tim Pepper
On Tue, Aug 24, 2010 at 10:31:07AM +0800, Xiaotian Feng wrote:
> The latest kvm mmu_shrink() rework changes kvm->arch.n_used_mmu_pages/
> kvm->arch.n_max_mmu_pages in kvm_mmu_free_page()/kvm_mmu_alloc_page(),
> which are called by kvm_mmu_commit_zap_page(). So kvm->arch.n_used_mmu_pages
> (and hence kvm_mmu_available_pages(vcpu->kvm)) is unchanged after
> kvm_mmu_prepare_zap_page(), which made kvm_mmu_change_mmu_pages()/
> __kvm_mmu_free_some_pages() loop forever. Moving kvm_mmu_commit_zap_page()
> inside the loop makes the while loop terminate as normal.
>
> Reported-by: Avi Kivity <avi@redhat.com>
> Signed-off-by: Xiaotian Feng <dfeng@redhat.com>
> Tested-by: Avi Kivity <avi@redhat.com>
> Cc: Marcelo Tosatti <mtosatti@redhat.com>
> Cc: Dave Hansen <dave@linux.vnet.ibm.com>
> Cc: Tim Pepper <lnxninja@linux.vnet.ibm.com>
> ---
> arch/x86/kvm/mmu.c | 7 +++----
> 1 files changed, 3 insertions(+), 4 deletions(-)
Applied, thanks.