From: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
To: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>,
gleb@redhat.com, avi.kivity@gmail.com, pbonzini@redhat.com,
linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Subject: Re: [PATCH v7 04/11] KVM: MMU: zap pages in batch
Date: Thu, 30 May 2013 00:03:44 +0800 [thread overview]
Message-ID: <51A626E0.9030308@linux.vnet.ibm.com> (raw)
In-Reply-To: <51A60A64.2080509@linux.vnet.ibm.com>
On 05/29/2013 10:02 PM, Xiao Guangrong wrote:
> On 05/29/2013 09:32 PM, Marcelo Tosatti wrote:
>> On Wed, May 29, 2013 at 09:09:09PM +0800, Xiao Guangrong wrote:
>>> This is the information from my reply to Gleb, in which he raised the
>>> question of why a "collapse tlb flush is needed":
>>>
>>> ======
>>> It seems no.
>>> Since we have reloaded mmu before zapping the obsolete pages, the mmu-lock
>>> is easily contended. I did the simple track:
>>>
>>> + int num = 0;
>>> restart:
>>> list_for_each_entry_safe_reverse(sp, node,
>>> &kvm->arch.active_mmu_pages, link) {
>>> @@ -4265,6 +4265,7 @@ restart:
>>> if (batch >= BATCH_ZAP_PAGES &&
>>> cond_resched_lock(&kvm->mmu_lock)) {
>>> batch = 0;
>>> + num++;
>>> goto restart;
>>> }
>>>
>>> @@ -4277,6 +4278,7 @@ restart:
>>> * may use the pages.
>>> */
>>> kvm_mmu_commit_zap_page(kvm, &invalid_list);
>>> + printk("lock-break: %d.\n", num);
>>> }
>>>
>>> I read the pci rom while doing a kernel build in the guest, which
>>> has 1G memory and 4 vcpus with ept enabled; this is a normal
>>> workload and a normal configuration.
>>>
>>> # dmesg
>>> [ 2338.759099] lock-break: 8.
>>> [ 2339.732442] lock-break: 5.
>>> [ 2340.904446] lock-break: 3.
>>> [ 2342.513514] lock-break: 3.
>>> [ 2343.452229] lock-break: 3.
>>> [ 2344.981599] lock-break: 4.
>>>
>>> Basically, we need to break many times.
>>
>> Should measure kvm_mmu_zap_all latency.
>>
>>> ======
>>>
>>> You can see we still have to break 3 times to zap all pages even though
>>> we zap 10 pages in a batch. Obviously, it would need to break more times
>>> without batch-zapping.
>>
>> Again, breaking should be no problem, what matters is latency. Please
>> measure kvm_mmu_zap_all latency after all optimizations to justify
>> this minimum batching.
>
> Okay, okay. I will benchmark the latency.
Okay, I have done the test. The test environment is the same as above:
reading the pci rom while doing a kernel build in a guest with 1G memory
and 4 vcpus, with ept enabled (a normal workload and normal configuration).
Batch-zapped:
Guest:
# cat /sys/bus/pci/devices/0000\:00\:03.0/rom
# free -m
total used free shared buffers cached
Mem: 975 793 181 0 6 438
-/+ buffers/cache: 347 627
Swap: 2015 43 1972
Host shows:
[ 2229.918558] lock-break: 5.
[ 2229.918564] kvm_mmu_invalidate_zap_all_pages: 174706e.
No-batch:
Guest:
# cat /sys/bus/pci/devices/0000\:00\:03.0/rom
# free -m
total used free shared buffers cached
Mem: 975 843 131 0 17 476
-/+ buffers/cache: 348 626
Swap: 2015 2
Host shows:
[ 2931.675285] lock-break: 13.
[ 2931.675291] kvm_mmu_invalidate_zap_all_pages: 69c1676.
That means, with nearly the same amount of memory accessed in the guest:
- batch-zapping needs to break 5 times; the latency is 0x174706e ns (~24.4 ms).
- no-batch needs to break 13 times; the latency is 0x69c1676 ns (~110.9 ms).
(local_clock() returns nanoseconds, and the debug printk below prints the
delta in hex.)
The code change used to track the latency:
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 055d675..a66f21b 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4233,13 +4233,13 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm, int slot)
spin_unlock(&kvm->mmu_lock);
}
-#define BATCH_ZAP_PAGES 10
+#define BATCH_ZAP_PAGES 0
static void kvm_zap_obsolete_pages(struct kvm *kvm)
{
struct kvm_mmu_page *sp, *node;
LIST_HEAD(invalid_list);
int batch = 0;
-
+ int num = 0;
restart:
list_for_each_entry_safe_reverse(sp, node,
&kvm->arch.active_mmu_pages, link) {
@@ -4265,6 +4265,7 @@ restart:
if (batch >= BATCH_ZAP_PAGES &&
cond_resched_lock(&kvm->mmu_lock)) {
batch = 0;
+ num++;
goto restart;
}
@@ -4277,6 +4278,7 @@ restart:
* may use the pages.
*/
kvm_mmu_commit_zap_page(kvm, &invalid_list);
+ printk("lock-break: %d.\n", num);
}
/*
@@ -4290,7 +4292,12 @@ restart:
*/
void kvm_mmu_invalidate_zap_all_pages(struct kvm *kvm)
{
+ u64 start;
+
spin_lock(&kvm->mmu_lock);
+
+ start = local_clock();
+
trace_kvm_mmu_invalidate_zap_all_pages(kvm);
kvm->arch.mmu_valid_gen++;
@@ -4306,6 +4313,9 @@ void kvm_mmu_invalidate_zap_all_pages(struct kvm *kvm)
kvm_reload_remote_mmus(kvm);
kvm_zap_obsolete_pages(kvm);
+
+ printk("%s: %llx.\n", __FUNCTION__, local_clock() - start);
+
spin_unlock(&kvm->mmu_lock);
}