* [Qemu-devel] Qemu/KVM guest boots 2x slower with vhost_net
From: Reeted @ 2011-10-04 23:12 UTC (permalink / raw)
To: kvm, libvir-list, qemu-devel; +Cc: Chris Wright, Richard W.M. Jones
Hello all,
for people on the qemu-devel list: you might want to have a look at the
previous thread on this topic, at
http://www.spinics.net/lists/kvm/msg61537.html
but I will try to recap here.
I found that virtual machines on my host booted 2x slower (2x slower on
average; some parts are at least 3x slower) under libvirt compared to a
manual qemu-kvm launch. With Daniel's help I narrowed it down to the
presence of vhost_net (enabled by default when launched by libvirt),
i.e. with vhost_net the boot process is *UNIFORMLY* 2x slower.
The problem is still reproducible on my systems, but they are going into
production soon and I am quite busy, so I may not have many more days
left for testing. It might be just next Saturday and Sunday, so if you
can write some of your suggestions here by Saturday, that would be most
appreciated.
I have now performed some benchmarks which I hadn't performed in the
old thread:

- openssl speed -multi 2 rsa (CPU benchmark): no performance
  difference with or without vhost_net
- disk benchmarks: no performance difference with or without vhost_net

The disk benchmarks were (both with cache=none and cache=writeback):

- dd streaming read
- dd streaming write
- fio 4k random read, in all of: cache=none; cache=writeback with the
  host cache dropped before the test; cache=writeback with all fio data
  in the host cache (measures context switches)
- fio 4k random write

So I couldn't reproduce the problem with any benchmark that came to mind.
But in the boot process it is very visible.
I'll continue the description below; before that, here are the system
specifications:
---------------------------------------
Host kernel is 3.0.3 and Qemu-KVM is 0.14.1, both vanilla and compiled
by me.
Libvirt is the version in Ubuntu 11.04 Natty, which is 0.8.8-1ubuntu6.5;
I didn't recompile this one.
VM disks are LVs of LVM on an MD RAID array.
The problem shows identically with both cache=none and cache=writeback.
AIO native.
Physical CPUs are dual Westmere 6-core (12 cores total, + hyperthreading).
2 vCPUs per VM.
All VMs are idle or off except the VM being tested.
Guests are:
- multiple Ubuntu 11.04 Natty 64-bit with their 2.6.38-8-virtual kernel:
very minimal Ubuntu installs made with debootstrap (not from the Ubuntu
installer)
- one Fedora Core 6 32-bit with a 32-bit 2.6.38-8-virtual kernel + initrd,
both taken from Ubuntu Natty 32-bit (so I could have virtio). Standard
install (except the kernel, which was replaced afterwards).
Static IP addresses in all guests.
---------------------------------------
All guest types show this problem, but it is most visible in the FC6
guest because its boot process is MUCH longer than in the
debootstrap-installed Ubuntus.
Please note that most of the boot process, at least from a certain point
onwards, appears to the eye uniformly 2x or 3x slower under vhost_net,
and by boot process I mean, roughly (copied by hand from some screenshots):
Loading default keymap
Setting hostname
Setting up LVM - no volume groups found
checking filesystems... clean ...
remounting root filesystem in read-write mode
mounting local filesystems
enabling local filesystems quotas
enabling /etc/fstab swaps
INIT entering runlevel 3
entering non-interactive startup
Starting sysstat: calling the system activity data collector (sadc)
Starting background readahead
********** from here onwards, everything, or almost everything, is
much slower
Checking for hardware changes
Bringing up loopback interface
Bringing up interface eth0
starting system logger
starting kernel logger
starting irqbalance
starting portmap
starting nfs statd
starting rpc idmapd
starting system message bus
mounting other filesystems
starting PC/SC smart card daemon (pcscd)
starting hidd ... can't open HIDP control socket : address family not
supported by protocol (this is an error due to backporting a new Ubuntu
kernel to FC6)
starting autofs: loading autofs4
starting automount
starting acpi daemon
starting hpiod
starting hpssd
starting cups
starting sshd
starting ntpd
starting sendmail
starting sm-client
starting console mouse services
starting crond
starting xfs
starting anacron
starting atd
starting yum-updatesd
starting Avahi daemon
starting HAL daemon
From the point I marked onwards, most entries are services, i.e. daemons
listening on sockets, so I thought that binding to a socket might be
slower under vhost_net; but putting nc in listening mode with
"nc -l 15000" is instantaneous, so I am not sure.
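The nc check can also be reproduced programmatically. Here is a minimal sketch (my own, not part of the original test) that times a daemon-style TCP socket setup, i.e. the create/bind/listen sequence the services above perform at startup:

```python
import socket
import time

# Time a daemon-style socket setup: create, bind, listen.
start = time.monotonic()
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("127.0.0.1", 0))  # port 0: let the kernel pick a free port
sock.listen(5)
elapsed = time.monotonic() - start
sock.close()

print(f"bind+listen took {elapsed * 1000:.3f} ms")
```

On a healthy system this completes in well under a millisecond, matching the instantaneous nc result; if socket binding were the bottleneck, the elapsed time here would have to differ with and without vhost_net.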
The shutdown of FC6, which tears down basically the same services as
above, is *also* much slower with vhost_net.
Thanks for any suggestions
R.
* Re: [Qemu-devel] Qemu/KVM guest boots 2x slower with vhost_net
From: Reeted @ 2011-10-09 21:47 UTC (permalink / raw)
To: kvm, libvir-list, qemu-devel; +Cc: Chris Wright, Richard W.M. Jones
On 10/05/11 01:12, Reeted wrote:
> .....
> I found that virtual machines in my host booted 2x slower ... to the
> vhost_net presence
> ...
Just a small update.
Firstly: I cannot reproduce any slowness after boot by doing:
# time /etc/init.d/chrony restart
Restarting time daemon: Starting /usr/sbin/chronyd...
chronyd is running and online.
real 0m3.022s
user 0m0.000s
sys 0m0.000s
Since this is a network service I expected it to show the problem, but
it doesn't: it takes exactly the same time with and without vhost_net.
Secondly: vhost_net appears to work correctly, because I have performed
an NPtcp performance test between two guests on the same host, and these
are the results:
vhost_net deactivated for both guests:
NPtcp -h 192.168.7.81
Send and receive buffers are 16384 and 87380 bytes
(A bug in Linux doubles the requested buffer sizes)
Now starting the main loop
0: 1 bytes 917 times --> 0.08 Mbps in 92.07 usec
1: 2 bytes 1086 times --> 0.18 Mbps in 86.04 usec
2: 3 bytes 1162 times --> 0.27 Mbps in 85.08 usec
3: 4 bytes 783 times --> 0.36 Mbps in 85.34 usec
4: 6 bytes 878 times --> 0.54 Mbps in 85.42 usec
5: 8 bytes 585 times --> 0.72 Mbps in 85.31 usec
6: 12 bytes 732 times --> 1.07 Mbps in 85.52 usec
7: 13 bytes 487 times --> 1.16 Mbps in 85.52 usec
8: 16 bytes 539 times --> 1.43 Mbps in 85.26 usec
9: 19 bytes 659 times --> 1.70 Mbps in 85.43 usec
10: 21 bytes 739 times --> 1.77 Mbps in 90.71 usec
11: 24 bytes 734 times --> 2.13 Mbps in 86.13 usec
12: 27 bytes 822 times --> 2.22 Mbps in 92.80 usec
13: 29 bytes 478 times --> 2.35 Mbps in 94.02 usec
14: 32 bytes 513 times --> 2.60 Mbps in 93.75 usec
15: 35 bytes 566 times --> 3.15 Mbps in 84.77 usec
16: 45 bytes 674 times --> 4.01 Mbps in 85.56 usec
17: 48 bytes 779 times --> 4.32 Mbps in 84.70 usec
18: 51 bytes 811 times --> 4.61 Mbps in 84.32 usec
19: 61 bytes 465 times --> 5.08 Mbps in 91.57 usec
20: 64 bytes 537 times --> 5.22 Mbps in 93.46 usec
21: 67 bytes 551 times --> 5.73 Mbps in 89.20 usec
22: 93 bytes 602 times --> 8.28 Mbps in 85.73 usec
23: 96 bytes 777 times --> 8.45 Mbps in 86.70 usec
24: 99 bytes 780 times --> 8.71 Mbps in 86.72 usec
25: 125 bytes 419 times --> 11.06 Mbps in 86.25 usec
26: 128 bytes 575 times --> 11.38 Mbps in 85.80 usec
27: 131 bytes 591 times --> 11.60 Mbps in 86.17 usec
28: 189 bytes 602 times --> 16.55 Mbps in 87.14 usec
29: 192 bytes 765 times --> 16.80 Mbps in 87.19 usec
30: 195 bytes 770 times --> 17.11 Mbps in 86.94 usec
31: 253 bytes 401 times --> 22.04 Mbps in 87.59 usec
32: 256 bytes 568 times --> 22.64 Mbps in 86.25 usec
33: 259 bytes 584 times --> 22.68 Mbps in 87.12 usec
34: 381 bytes 585 times --> 33.19 Mbps in 87.58 usec
35: 384 bytes 761 times --> 33.54 Mbps in 87.36 usec
36: 387 bytes 766 times --> 33.91 Mbps in 87.08 usec
37: 509 bytes 391 times --> 44.23 Mbps in 87.80 usec
38: 512 bytes 568 times --> 44.70 Mbps in 87.39 usec
39: 515 bytes 574 times --> 45.21 Mbps in 86.90 usec
40: 765 bytes 580 times --> 66.05 Mbps in 88.36 usec
41: 768 bytes 754 times --> 66.73 Mbps in 87.81 usec
42: 771 bytes 760 times --> 67.02 Mbps in 87.77 usec
43: 1021 bytes 384 times --> 88.04 Mbps in 88.48 usec
44: 1024 bytes 564 times --> 88.30 Mbps in 88.48 usec
45: 1027 bytes 566 times --> 88.63 Mbps in 88.40 usec
46: 1533 bytes 568 times --> 71.75 Mbps in 163.00 usec
47: 1536 bytes 408 times --> 72.11 Mbps in 162.51 usec
48: 1539 bytes 410 times --> 71.71 Mbps in 163.75 usec
49: 2045 bytes 204 times --> 95.40 Mbps in 163.55 usec
50: 2048 bytes 305 times --> 95.26 Mbps in 164.02 usec
51: 2051 bytes 305 times --> 95.33 Mbps in 164.14 usec
52: 3069 bytes 305 times --> 141.16 Mbps in 165.87 usec
53: 3072 bytes 401 times --> 142.19 Mbps in 164.83 usec
54: 3075 bytes 404 times --> 150.68 Mbps in 155.70 usec
55: 4093 bytes 214 times --> 192.36 Mbps in 162.33 usec
56: 4096 bytes 307 times --> 193.21 Mbps in 161.74 usec
57: 4099 bytes 309 times --> 213.24 Mbps in 146.66 usec
58: 6141 bytes 341 times --> 330.80 Mbps in 141.63 usec
59: 6144 bytes 470 times --> 328.09 Mbps in 142.87 usec
60: 6147 bytes 466 times --> 330.53 Mbps in 141.89 usec
61: 8189 bytes 235 times --> 437.29 Mbps in 142.87 usec
62: 8192 bytes 349 times --> 436.23 Mbps in 143.27 usec
63: 8195 bytes 349 times --> 436.99 Mbps in 143.08 usec
64: 12285 bytes 349 times --> 625.88 Mbps in 149.75 usec
65: 12288 bytes 445 times --> 626.27 Mbps in 149.70 usec
66: 12291 bytes 445 times --> 626.15 Mbps in 149.76 usec
67: 16381 bytes 222 times --> 793.58 Mbps in 157.48 usec
68: 16384 bytes 317 times --> 806.90 Mbps in 154.91 usec
69: 16387 bytes 322 times --> 796.81 Mbps in 156.90 usec
70: 24573 bytes 318 times --> 1127.58 Mbps in 166.26 usec
71: 24576 bytes 400 times --> 1125.20 Mbps in 166.64 usec
72: 24579 bytes 400 times --> 1124.84 Mbps in 166.71 usec
73: 32765 bytes 200 times --> 1383.86 Mbps in 180.64 usec
74: 32768 bytes 276 times --> 1376.05 Mbps in 181.68 usec
75: 32771 bytes 275 times --> 1377.47 Mbps in 181.51 usec
76: 49149 bytes 275 times --> 1824.90 Mbps in 205.48 usec
77: 49152 bytes 324 times --> 1813.95 Mbps in 206.73 usec
78: 49155 bytes 322 times --> 1765.68 Mbps in 212.40 usec
79: 65533 bytes 156 times --> 2193.44 Mbps in 227.94 usec
80: 65536 bytes 219 times --> 2186.79 Mbps in 228.65 usec
81: 65539 bytes 218 times --> 2186.98 Mbps in 228.64 usec
82: 98301 bytes 218 times --> 2831.01 Mbps in 264.92 usec
83: 98304 bytes 251 times --> 2804.76 Mbps in 267.40 usec
84: 98307 bytes 249 times --> 2824.62 Mbps in 265.53 usec
85: 131069 bytes 125 times --> 3106.48 Mbps in 321.90 usec
86: 131072 bytes 155 times --> 3033.71 Mbps in 329.63 usec
87: 131075 bytes 151 times --> 3044.89 Mbps in 328.43 usec
88: 196605 bytes 152 times --> 4196.94 Mbps in 357.40 usec
89: 196608 bytes 186 times --> 4358.25 Mbps in 344.17 usec
90: 196611 bytes 193 times --> 4362.34 Mbps in 343.86 usec
91: 262141 bytes 96 times --> 4654.49 Mbps in 429.69 usec
92: 262144 bytes 116 times --> 4727.16 Mbps in 423.09 usec
93: 262147 bytes 118 times --> 4697.22 Mbps in 425.79 usec
94: 393213 bytes 117 times --> 5452.51 Mbps in 550.20 usec
95: 393216 bytes 121 times --> 5360.27 Mbps in 559.67 usec
96: 393219 bytes 119 times --> 5358.03 Mbps in 559.91 usec
97: 524285 bytes 59 times --> 5053.83 Mbps in 791.47 usec
98: 524288 bytes 63 times --> 5033.86 Mbps in 794.62 usec
99: 524291 bytes 62 times --> 5691.44 Mbps in 702.81 usec
100: 786429 bytes 71 times --> 5750.68 Mbps in 1043.35 usec
101: 786432 bytes 63 times --> 5809.21 Mbps in 1032.84 usec
102: 786435 bytes 64 times --> 5864.45 Mbps in 1023.12 usec
103: 1048573 bytes 32 times --> 5755.24 Mbps in 1390.03 usec
104: 1048576 bytes 35 times --> 6001.51 Mbps in 1333.00 usec
105: 1048579 bytes 37 times --> 6099.40 Mbps in 1311.61 usec
106: 1572861 bytes 38 times --> 6061.69 Mbps in 1979.64 usec
107: 1572864 bytes 33 times --> 6144.15 Mbps in 1953.08 usec
108: 1572867 bytes 34 times --> 6108.20 Mbps in 1964.58 usec
109: 2097149 bytes 16 times --> 6128.72 Mbps in 2610.65 usec
110: 2097152 bytes 19 times --> 6271.35 Mbps in 2551.29 usec
111: 2097155 bytes 19 times --> 6273.55 Mbps in 2550.39 usec
112: 3145725 bytes 19 times --> 6146.28 Mbps in 3904.79 usec
113: 3145728 bytes 17 times --> 6288.29 Mbps in 3816.62 usec
114: 3145731 bytes 17 times --> 6234.73 Mbps in 3849.41 usec
115: 4194301 bytes 8 times --> 5852.76 Mbps in 5467.50 usec
116: 4194304 bytes 9 times --> 5886.74 Mbps in 5435.94 usec
117: 4194307 bytes 9 times --> 5887.35 Mbps in 5435.39 usec
118: 6291453 bytes 9 times --> 4502.11 Mbps in 10661.67 usec
119: 6291456 bytes 6 times --> 4541.26 Mbps in 10569.75 usec
120: 6291459 bytes 6 times --> 4465.98 Mbps in 10747.93 usec
121: 8388605 bytes 3 times --> 4601.84 Mbps in 13907.47 usec
122: 8388608 bytes 3 times --> 4590.50 Mbps in 13941.84 usec
123: 8388611 bytes 3 times --> 4195.17 Mbps in 15255.65 usec
vhost_net activated for both guests:
NPtcp -h 192.168.7.81
Send and receive buffers are 16384 and 87380 bytes
(A bug in Linux doubles the requested buffer sizes)
Now starting the main loop
0: 1 bytes 1013 times --> 0.10 Mbps in 75.89 usec
1: 2 bytes 1317 times --> 0.21 Mbps in 74.03 usec
2: 3 bytes 1350 times --> 0.30 Mbps in 76.90 usec
3: 4 bytes 866 times --> 0.43 Mbps in 71.27 usec
4: 6 bytes 1052 times --> 0.60 Mbps in 76.02 usec
5: 8 bytes 657 times --> 0.79 Mbps in 76.88 usec
6: 12 bytes 812 times --> 1.24 Mbps in 73.72 usec
7: 13 bytes 565 times --> 1.40 Mbps in 70.60 usec
8: 16 bytes 653 times --> 1.58 Mbps in 77.05 usec
9: 19 bytes 730 times --> 1.90 Mbps in 76.25 usec
10: 21 bytes 828 times --> 1.98 Mbps in 80.85 usec
11: 24 bytes 824 times --> 2.47 Mbps in 74.22 usec
12: 27 bytes 954 times --> 2.73 Mbps in 75.45 usec
13: 29 bytes 589 times --> 3.06 Mbps in 72.23 usec
14: 32 bytes 668 times --> 3.26 Mbps in 74.84 usec
15: 35 bytes 709 times --> 3.46 Mbps in 77.09 usec
16: 45 bytes 741 times --> 4.50 Mbps in 76.35 usec
17: 48 bytes 873 times --> 4.83 Mbps in 75.90 usec
18: 51 bytes 905 times --> 5.50 Mbps in 70.72 usec
19: 61 bytes 554 times --> 6.36 Mbps in 73.14 usec
20: 64 bytes 672 times --> 6.28 Mbps in 77.77 usec
21: 67 bytes 663 times --> 6.39 Mbps in 80.06 usec
22: 93 bytes 671 times --> 9.44 Mbps in 75.15 usec
23: 96 bytes 887 times --> 9.52 Mbps in 76.90 usec
24: 99 bytes 880 times --> 10.55 Mbps in 71.57 usec
25: 125 bytes 508 times --> 12.63 Mbps in 75.49 usec
26: 128 bytes 657 times --> 12.30 Mbps in 79.38 usec
27: 131 bytes 639 times --> 12.72 Mbps in 78.57 usec
28: 189 bytes 660 times --> 18.36 Mbps in 78.55 usec
29: 192 bytes 848 times --> 18.84 Mbps in 77.75 usec
30: 195 bytes 864 times --> 18.91 Mbps in 78.69 usec
31: 253 bytes 443 times --> 24.04 Mbps in 80.28 usec
32: 256 bytes 620 times --> 26.61 Mbps in 73.40 usec
33: 259 bytes 686 times --> 26.09 Mbps in 75.75 usec
34: 381 bytes 672 times --> 40.04 Mbps in 72.59 usec
35: 384 bytes 918 times --> 39.67 Mbps in 73.86 usec
36: 387 bytes 906 times --> 40.68 Mbps in 72.58 usec
37: 509 bytes 469 times --> 51.70 Mbps in 75.11 usec
38: 512 bytes 664 times --> 51.55 Mbps in 75.77 usec
39: 515 bytes 662 times --> 49.61 Mbps in 79.19 usec
40: 765 bytes 637 times --> 75.91 Mbps in 76.89 usec
41: 768 bytes 867 times --> 76.03 Mbps in 77.07 usec
42: 771 bytes 866 times --> 76.21 Mbps in 77.19 usec
43: 1021 bytes 436 times --> 99.46 Mbps in 78.32 usec
44: 1024 bytes 637 times --> 100.04 Mbps in 78.10 usec
45: 1027 bytes 641 times --> 100.06 Mbps in 78.31 usec
46: 1533 bytes 641 times --> 113.15 Mbps in 103.36 usec
47: 1536 bytes 644 times --> 127.72 Mbps in 91.75 usec
48: 1539 bytes 727 times --> 102.87 Mbps in 114.14 usec
49: 2045 bytes 293 times --> 177.68 Mbps in 87.81 usec
50: 2048 bytes 569 times --> 103.58 Mbps in 150.85 usec
51: 2051 bytes 331 times --> 107.53 Mbps in 145.52 usec
52: 3069 bytes 344 times --> 204.05 Mbps in 114.75 usec
53: 3072 bytes 580 times --> 207.53 Mbps in 112.93 usec
54: 3075 bytes 590 times --> 211.37 Mbps in 110.99 usec
55: 4093 bytes 301 times --> 285.23 Mbps in 109.48 usec
56: 4096 bytes 456 times --> 317.27 Mbps in 98.50 usec
57: 4099 bytes 507 times --> 332.92 Mbps in 93.93 usec
58: 6141 bytes 532 times --> 462.96 Mbps in 101.20 usec
59: 6144 bytes 658 times --> 451.75 Mbps in 103.76 usec
60: 6147 bytes 642 times --> 478.19 Mbps in 98.07 usec
61: 8189 bytes 340 times --> 743.81 Mbps in 84.00 usec
62: 8192 bytes 595 times --> 695.89 Mbps in 89.81 usec
63: 8195 bytes 556 times --> 702.95 Mbps in 88.94 usec
64: 12285 bytes 562 times --> 945.94 Mbps in 99.08 usec
65: 12288 bytes 672 times --> 870.86 Mbps in 107.65 usec
66: 12291 bytes 619 times --> 954.94 Mbps in 98.20 usec
67: 16381 bytes 339 times --> 1003.02 Mbps in 124.60 usec
68: 16384 bytes 401 times --> 652.84 Mbps in 191.47 usec
69: 16387 bytes 261 times --> 872.02 Mbps in 143.37 usec
70: 24573 bytes 348 times --> 1105.61 Mbps in 169.57 usec
71: 24576 bytes 393 times --> 1037.52 Mbps in 180.72 usec
72: 24579 bytes 368 times --> 1066.39 Mbps in 175.85 usec
73: 32765 bytes 189 times --> 1271.24 Mbps in 196.64 usec
74: 32768 bytes 254 times --> 1253.73 Mbps in 199.41 usec
75: 32771 bytes 250 times --> 1101.71 Mbps in 226.94 usec
76: 49149 bytes 220 times --> 1704.99 Mbps in 219.93 usec
77: 49152 bytes 303 times --> 1678.17 Mbps in 223.46 usec
78: 49155 bytes 298 times --> 1648.32 Mbps in 227.52 usec
79: 65533 bytes 146 times --> 1940.36 Mbps in 257.67 usec
80: 65536 bytes 194 times --> 1785.37 Mbps in 280.05 usec
81: 65539 bytes 178 times --> 2079.85 Mbps in 240.41 usec
82: 98301 bytes 207 times --> 2840.36 Mbps in 264.04 usec
83: 98304 bytes 252 times --> 3441.30 Mbps in 217.94 usec
84: 98307 bytes 305 times --> 3575.33 Mbps in 209.78 usec
85: 131069 bytes 158 times --> 3145.83 Mbps in 317.87 usec
86: 131072 bytes 157 times --> 3283.65 Mbps in 304.54 usec
87: 131075 bytes 164 times --> 3610.07 Mbps in 277.01 usec
88: 196605 bytes 180 times --> 4921.05 Mbps in 304.81 usec
89: 196608 bytes 218 times --> 4953.98 Mbps in 302.79 usec
90: 196611 bytes 220 times --> 4841.76 Mbps in 309.81 usec
91: 262141 bytes 107 times --> 4546.37 Mbps in 439.91 usec
92: 262144 bytes 113 times --> 4730.30 Mbps in 422.81 usec
93: 262147 bytes 118 times --> 5211.50 Mbps in 383.77 usec
94: 393213 bytes 130 times --> 7191.67 Mbps in 417.15 usec
95: 393216 bytes 159 times --> 7423.89 Mbps in 404.10 usec
96: 393219 bytes 164 times --> 7321.70 Mbps in 409.74 usec
97: 524285 bytes 81 times --> 7631.75 Mbps in 524.12 usec
98: 524288 bytes 95 times --> 7287.79 Mbps in 548.86 usec
99: 524291 bytes 91 times --> 7253.28 Mbps in 551.48 usec
100: 786429 bytes 90 times --> 8451.33 Mbps in 709.94 usec
101: 786432 bytes 93 times --> 8755.43 Mbps in 685.29 usec
102: 786435 bytes 97 times --> 8740.15 Mbps in 686.49 usec
103: 1048573 bytes 48 times --> 9220.97 Mbps in 867.59 usec
104: 1048576 bytes 57 times --> 8512.15 Mbps in 939.83 usec
105: 1048579 bytes 53 times --> 8556.70 Mbps in 934.94 usec
106: 1572861 bytes 53 times --> 9566.40 Mbps in 1254.39 usec
107: 1572864 bytes 53 times --> 10165.18 Mbps in 1180.50 usec
108: 1572867 bytes 56 times --> 11420.63 Mbps in 1050.73 usec
109: 2097149 bytes 31 times --> 11295.29 Mbps in 1416.52 usec
110: 2097152 bytes 35 times --> 11869.30 Mbps in 1348.02 usec
111: 2097155 bytes 37 times --> 11407.22 Mbps in 1402.62 usec
112: 3145725 bytes 35 times --> 12821.47 Mbps in 1871.86 usec
113: 3145728 bytes 35 times --> 11727.57 Mbps in 2046.46 usec
114: 3145731 bytes 32 times --> 12803.10 Mbps in 1874.55 usec
115: 4194301 bytes 17 times --> 10009.28 Mbps in 3197.03 usec
116: 4194304 bytes 15 times --> 10283.54 Mbps in 3111.77 usec
117: 4194307 bytes 16 times --> 10923.95 Mbps in 2929.34 usec
118: 6291453 bytes 17 times --> 11959.10 Mbps in 4013.68 usec
119: 6291456 bytes 16 times --> 10674.76 Mbps in 4496.59 usec
120: 6291459 bytes 14 times --> 10868.07 Mbps in 4416.61 usec
121: 8388605 bytes 7 times --> 9456.16 Mbps in 6768.07 usec
122: 8388608 bytes 7 times --> 9303.58 Mbps in 6879.07 usec
123: 8388611 bytes 7 times --> 10048.79 Mbps in 6368.93 usec
So with vhost_net, NPtcp is indeed faster, by a factor of almost 2, but
that is only visible at the very highest speeds and buffer sizes. Also
note that throughput in the vhost_net case is much more variable, even
though it is higher on average.
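The "factor of almost 2" can be checked numerically. A small sketch, using a few of the large-message throughput pairs copied from the two runs above:

```python
# Throughput pairs (Mbps) copied from the two NPtcp runs above, at large
# message sizes where the vhost_net advantage is visible.
# Each entry: message size in bytes -> (no vhost_net, vhost_net).
pairs = {
    2097152: (6271.35, 11869.30),
    3145728: (6288.29, 11727.57),
    4194304: (5886.74, 10283.54),
    8388608: (4590.50, 9303.58),
}

ratios = {size: vhost / plain for size, (plain, vhost) in pairs.items()}
for size, r in sorted(ratios.items()):
    print(f"{size:>8} bytes: {r:.2f}x faster with vhost_net")

mean = sum(ratios.values()) / len(ratios)
print(f"mean speedup over these sizes: {mean:.2f}x")  # ~1.9x
```

The per-size ratios range from about 1.75x to 2.03x over these samples, which is where the "almost 2" figure comes from.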
I expected more of a difference... actually, I expected no-vhost to be
slower than it is. Kudos to the developers.
If you have any more ideas about the slower boot, please tell me. I am
of course not worried about waiting 10-30 more seconds at boot time; I
am worried that if there is some 2x or 3x slowdown factor somewhere, it
can bite me in production without me even realizing it.
And after seeing the above no-vhost_net TCP benchmarks, I guess I don't
really need vhost_net active for these VMs in production, so I will just
disable vhost_net to be on the safe side until I can track down the
boot-time slowness somehow.
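For reference, disabling vhost_net for a libvirt guest can be done per interface in the domain XML by forcing the userspace backend. A sketch (the bridge name is a placeholder for your own setup):

```xml
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
  <!-- name='qemu' selects the userspace virtio backend instead of vhost -->
  <driver name='qemu'/>
</interface>
```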
Thanks for your help
R.