From: Sasha Levin <levinsasha928@gmail.com>
To: Avi Kivity <avi@redhat.com>
Cc: mtosatti@redhat.com, gregkh@linuxfoundation.org,
sjenning@linux.vnet.ibm.com, dan.magenheimer@oracle.com,
konrad.wilk@oracle.com, kvm@vger.kernel.org
Subject: Re: [RFC 00/10] KVM: Add TMEM host/guest support
Date: Fri, 08 Jun 2012 15:20:41 +0200 [thread overview]
Message-ID: <1339161641.3200.15.camel@lappy> (raw)
In-Reply-To: <4FCF5A08.7080306@redhat.com>
I re-ran the benchmarks in a single-user environment to get more stable results, increasing the test files to 50 GB each.
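The original mail doesn't show how the test files were created; a plausible sketch (file names and block size assumed from the dd runs below) would be one perfectly compressible file and one incompressible one:

```shell
# Hypothetical recreation of the test files (names assumed from the dd
# invocations below). test/zero compresses perfectly, so tmem/zcache can
# cache it almost for free; test/random defeats compression entirely.
mkdir -p test
dd if=/dev/zero    of=test/zero   bs=4M count=12800   # 50 GiB of zeros
dd if=/dev/urandom of=test/random bs=4M count=12800   # ~50 GiB of noise
```

Note the random file in the results below is slightly smaller (52824875008 bytes), so the exact count used originally may have differed.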
First, a test of the good-case scenario for KVM TMEM: streaming a file that compresses well but is bigger than the host's RAM.
First, no KVM TMEM, caching=none:
sh-4.2# time dd if=test/zero of=/dev/null bs=4M count=2048
2048+0 records in
2048+0 records out
8589934592 bytes (8.6 GB) copied, 116.309 s, 73.9 MB/s
real 1m56.349s
user 0m0.015s
sys 0m15.671s
sh-4.2# time dd if=test/zero of=/dev/null bs=4M count=2048
2048+0 records in
2048+0 records out
8589934592 bytes (8.6 GB) copied, 116.191 s, 73.9 MB/s
real 1m56.255s
user 0m0.018s
sys 0m15.504s
Now, no KVM TMEM, caching=writeback:
sh-4.2# time dd if=test/zero of=/dev/null bs=4M count=2048
2048+0 records in
2048+0 records out
8589934592 bytes (8.6 GB) copied, 122.894 s, 69.9 MB/s
real 2m2.965s
user 0m0.015s
sys 0m11.025s
sh-4.2# time dd if=test/zero of=/dev/null bs=4M count=2048
2048+0 records in
2048+0 records out
8589934592 bytes (8.6 GB) copied, 110.915 s, 77.4 MB/s
real 1m50.968s
user 0m0.011s
sys 0m10.108s
And finally, KVM TMEM on, caching=none:
sh-4.2# time dd if=test/zero of=/dev/null bs=4M count=2048
2048+0 records in
2048+0 records out
8589934592 bytes (8.6 GB) copied, 119.024 s, 72.2 MB/s
real 1m59.123s
user 0m0.020s
sys 0m29.336s
sh-4.2# time dd if=test/zero of=/dev/null bs=4M count=2048
2048+0 records in
2048+0 records out
8589934592 bytes (8.6 GB) copied, 36.8798 s, 233 MB/s
real 0m36.950s
user 0m0.005s
sys 0m35.308s
This is a snapshot of kvm_stat while the 2nd run of the KVM TMEM test was in progress (columns: event name, cumulative count, events/sec):
kvm statistics
kvm_exit 1952342 36037
kvm_entry 1952334 36034
kvm_hypercall 1710568 33948
kvm_apic 109027 1319
kvm_emulate_insn 63745 673
kvm_mmio 63483 669
kvm_inj_virq 45899 654
kvm_apic_accept_irq 45809 654
kvm_pio 18445 52
kvm_set_irq 19102 50
kvm_msi_set_irq 17809 47
kvm_fpu 244 18
kvm_apic_ipi 368 8
kvm_cr 70 6
kvm_userspace_exit 897 5
kvm_cpuid 48 5
vcpu_match_mmio 257 3
kvm_pic_set_irq 1293 3
kvm_ioapic_set_irq 1293 3
kvm_ack_irq 84 1
kvm_page_fault 60538 0
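For reference, the cumulative counters that kvm_stat samples can also be read directly from debugfs on the host (path assumed to be the default debugfs mount; requires root). kvm_stat derives the per-second column by sampling these twice:

```shell
# One-shot dump of KVM event counters (cumulative totals only).
# /sys/kernel/debug/kvm is the standard location when debugfs is mounted.
for f in /sys/kernel/debug/kvm/*; do
    printf '%-24s %s\n' "$(basename "$f")" "$(cat "$f")"
done
```

The dominant kvm_hypercall count above is expected with KVM TMEM active, since every tmem put/get is a hypercall.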
Now for the worst-case "streaming test": streaming two files, one that compresses well (zeros) and one full of random bits, with two runs for each.
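The invocations for these 50 GB runs aren't quoted below, but from the record counts they would have been along these lines (file names assumed; a cache drop between runs is assumed methodology, not stated in the original):

```shell
# Hypothetical reproduction of the streaming runs: 12800 * 4 MiB = 50 GiB.
# Drop the guest page cache first so each pair of runs starts cold.
sync
echo 3 > /proc/sys/vm/drop_caches

time dd if=test/zero   of=/dev/null bs=4M count=12800
time dd if=test/random of=/dev/null bs=4M count=12800
# dd stops at EOF, so the slightly smaller random file yields the
# "12594+1 records" seen in the output below.
```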
First, the baseline - no KVM TMEM, caching=none:
Zero file:
12800+0 records in
12800+0 records out
53687091200 bytes (54 GB) copied, 703.502 s, 76.3 MB/s
real 11m43.583s
user 0m0.106s
sys 1m42.075s
12800+0 records in
12800+0 records out
53687091200 bytes (54 GB) copied, 691.208 s, 77.7 MB/s
real 11m31.284s
user 0m0.100s
sys 1m41.235s
Random file:
12594+1 records in
12594+1 records out
52824875008 bytes (53 GB) copied, 655.778 s, 80.6 MB/s
real 10m55.847s
user 0m0.107s
sys 1m39.852s
12594+1 records in
12594+1 records out
52824875008 bytes (53 GB) copied, 652.668 s, 80.9 MB/s
real 10m52.739s
user 0m0.120s
sys 1m39.712s
Now, this is with zcache enabled in the guest (not going through KVM TMEM), caching=none:
Zeros:
12800+0 records in
12800+0 records out
53687091200 bytes (54 GB) copied, 704.479 s, 76.2 MB/s
real 11m44.536s
user 0m0.088s
sys 2m0.639s
12800+0 records in
12800+0 records out
53687091200 bytes (54 GB) copied, 690.501 s, 77.8 MB/s
real 11m30.561s
user 0m0.088s
sys 1m57.637s
Random:
12594+1 records in
12594+1 records out
52824875008 bytes (53 GB) copied, 656.436 s, 80.5 MB/s
real 10m56.480s
user 0m0.034s
sys 3m18.750s
12594+1 records in
12594+1 records out
52824875008 bytes (53 GB) copied, 658.446 s, 80.2 MB/s
real 10m58.499s
user 0m0.046s
sys 3m23.678s
Next, with KVM TMEM enabled, caching=none:
Zeros:
12800+0 records in
12800+0 records out
53687091200 bytes (54 GB) copied, 711.754 s, 75.4 MB/s
real 11m51.916s
user 0m0.081s
sys 2m59.952s
12800+0 records in
12800+0 records out
53687091200 bytes (54 GB) copied, 690.958 s, 77.7 MB/s
real 11m31.102s
user 0m0.082s
sys 3m6.500s
Random:
12594+1 records in
12594+1 records out
52824875008 bytes (53 GB) copied, 656.378 s, 80.5 MB/s
real 10m56.445s
user 0m0.062s
sys 5m53.236s
12594+1 records in
12594+1 records out
52824875008 bytes (53 GB) copied, 653.353 s, 80.9 MB/s
real 10m53.404s
user 0m0.066s
sys 5m57.087s
This is a snapshot of kvm_stat while this test was running:
kvm statistics
kvm_entry 168179 20729
kvm_exit 168179 20728
kvm_hypercall 131808 16409
kvm_apic 17305 2006
kvm_mmio 10877 1259
kvm_emulate_insn 10974 1258
kvm_page_fault 6270 866
kvm_inj_virq 6532 751
kvm_apic_accept_irq 6516 751
kvm_set_irq 4888 536
kvm_msi_set_irq 4471 536
kvm_pio 4714 529
kvm_userspace_exit 300 2
vcpu_match_mmio 83 2
kvm_apic_ipi 69 2
kvm_pic_set_irq 417 0
kvm_ioapic_set_irq 417 0
kvm_fpu 76 0
kvm_ack_irq 27 0
kvm_cr 24 0
kvm_cpuid 16 0
And finally, KVM TMEM enabled, with caching=writeback:
Zeros:
12800+0 records in
12800+0 records out
53687091200 bytes (54 GB) copied, 710.62 s, 75.5 MB/s
real 11m50.698s
user 0m0.078s
sys 3m29.920s
12800+0 records in
12800+0 records out
53687091200 bytes (54 GB) copied, 686.286 s, 78.2 MB/s
real 11m26.321s
user 0m0.088s
sys 3m25.931s
Random:
12594+1 records in
12594+1 records out
52824875008 bytes (53 GB) copied, 673.831 s, 78.4 MB/s
real 11m13.883s
user 0m0.047s
sys 4m5.569s
12594+1 records in
12594+1 records out
52824875008 bytes (53 GB) copied, 673.594 s, 78.4 MB/s
real 11m13.619s
user 0m0.056s
sys 4m12.134s