public inbox for kvm@vger.kernel.org
 help / color / mirror / Atom feed
* PV network performance comparison
@ 2007-10-11 21:30 James Dykman
       [not found] ` <OF0BAE27C6.6B75A1D6-ON85257371.00749F3B-85257371.007646DD-r/Jw6+rmf7HQT0dZR+AlfA@public.gmane.org>
  0 siblings, 1 reply; 5+ messages in thread
From: James Dykman @ 2007-10-11 21:30 UTC (permalink / raw)
  To: dor.laor-Re5JQEeQqe8AvxtiuMwx3w
  Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f

Dor,

I ran some netperf tests with your PV 
virtio drivers, along with some Xen PV cases
and a few others for comparison. I thought you
(and the list) might be interested in the numbers. 

I am going to start looking for bottlenecks, unless
you need help with the new hypercall updates.
I'll re-run when that is available.

Jim

Tests were run with netperf 2.4.3; TCP socket 
buffers were 256k. All of the tests were run with
netserver in the guest and netperf in the host/dom0.
No bridge was used.
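For reference, the test matrix described above could be driven
with commands of roughly the following shape. This is my own
reconstruction, not Jim's actual scripts; the netperf flags
(-t for the test type, -H for the netserver host, and the
post-"--" options for request size and socket buffers) are
standard netperf options, but treat the exact invocation as an
assumption:

```python
# Hypothetical reconstruction of the benchmark runs described
# above. Flag usage is an assumption, not taken from the post:
# -t selects the test, -H names the netserver host, and the
# options after "--" set message/request sizes and the 256k
# socket buffers mentioned in the setup.
RR_SIZES = [1, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384]
STREAM_SIZES = [2048, 4096, 8192, 16384, 32768, 65536, 131072]
SOCKBUF = 256 * 1024  # "TCP socket buffers were 256k"

def rr_cmd(host, size):
    """TCP_RR: request and response are both `size` bytes."""
    return ["netperf", "-H", host, "-t", "TCP_RR", "--",
            "-r", f"{size},{size}",
            "-s", str(SOCKBUF), "-S", str(SOCKBUF)]

def stream_cmd(host, size):
    """TCP_STREAM: send messages of `size` bytes."""
    return ["netperf", "-H", host, "-t", "TCP_STREAM", "--",
            "-m", str(size),
            "-s", str(SOCKBUF), "-S", str(SOCKBUF)]

# One command per row of the two tables below.
cmds = ([rr_cmd("guest", s) for s in RR_SIZES] +
        [stream_cmd("guest", s) for s in STREAM_SIZES])
```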

Hardware: IBM HS21 blade 
        Dual Xeon w/HT @ 1.6GHz, 4GB

The host/Dom0 configuration:
kvm.*:
        Host is 32 bit Ubuntu 7.04 server running
        Dor's 2.6.22-rc3 kernel.
xen.*:
        Dom0 is 32 bit Ubuntu 7.04 server running
        the 2.6.18 kernel from xen3.1

The guest configurations:
All guests/domUs are 512MB, 1 CPU
kvm.rtl: (KVM with emulated RTL8029)
        Fedora 7 32 bit guest
        Standard 2.6.21-1.3194.fc7 kernel
kvm.pv: (KVM w/Dor's paravirt drivers)
        Fedora 7 32 bit guest running 
        Dor's 2.6.22-rc3 kernel. 
xen.pv: (Xen paravirt)
        Ubuntu 7.04 server w/2.6.18-xen kernel 
xen.um: (Xen HVM with unmodified drivers)
        Ubuntu 7.04 server w/2.6.18-xen kernel, 
        unmodified drivers compiled from xen3.1 
kvm.lo: (Host loopback)

TCP REQUEST/RESPONSE (Trans. Rate per sec)
size  kvm.rtl  kvm.pv    xen.pv    xen.um    kvm.lo
1     2191.47  9533.74  18052.37  13593.58  42400.73
64    2184.30  9518.13  17979.93  13557.98  42260.53
128   2177.52  9482.45  17940.08  13588.54  40983.90
256   2160.49  9465.97  17788.21  13492.42  41170.45
512   2130.99  9403.33  17655.11  13489.64  40765.26
1024  2074.85  9204.90  17293.06  13572.01  39437.78
2048   416.18  4750.41  12907.57  11571.07  37252.42
4096   265.22  3691.90  10990.67   9943.64  31905.03
8192   116.80  1892.25   8439.83   6604.64  24397.95
16384   92.06  1004.58   4535.86   3924.68  17460.30
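One thing the table makes easy to quantify is the cliff every
configuration hits between 1024- and 2048-byte transactions,
once a transaction no longer fits a single ~1500-byte frame.
A quick check (data copied from the table above):

```python
# Transaction rates/sec at 1024 and 2048 bytes, copied from
# the TCP_RR table above.
# Columns: kvm.rtl, kvm.pv, xen.pv, xen.um, kvm.lo
rr = {
    1024: (2074.85, 9204.90, 17293.06, 13572.01, 39437.78),
    2048: ( 416.18, 4750.41, 12907.57, 11571.07, 37252.42),
}
names = ("kvm.rtl", "kvm.pv", "xen.pv", "xen.um", "kvm.lo")
drop = {n: rr[1024][i] / rr[2048][i] for i, n in enumerate(names)}
# kvm.rtl falls off hardest, roughly a 5x drop; host loopback
# (kvm.lo) barely notices the same size change.
```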

TCP STREAM (Throughput 10^6bits/sec)
  size    kvm.rtl  kvm.pv   xen.pv   xen.um  kvm.lo
   2048    33.06   507.21   555.94  1442.38  5409.73
   4096    33.16   526.75   848.26  2359.42  6152.48
   8192    33.13   527.99   997.69  2418.87  7267.73
  16384    33.08   525.95  1107.64  2379.50  8434.29
  32768    33.13   525.38  1199.08  2375.81  8857.09
  65536    33.20   523.39  1255.33  2473.92  9248.35
 131072    33.11   520.87  1292.54  2605.49  8559.21
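The stream table also shows kvm.pv saturating near 525 Mbit/s
regardless of message size, while xen.pv keeps scaling; at the
largest size kvm.pv delivers about 40% of xen.pv's throughput
(numbers copied from the table above):

```python
# Throughput in 10^6 bits/sec at the largest message size
# (131072), copied from the TCP_STREAM table above.
kvm_pv, xen_pv = 520.87, 1292.54
ratio = kvm_pv / xen_pv  # ~0.40: kvm.pv at ~40% of xen.pv

# kvm.pv across all seven message sizes: essentially flat.
kvm_pv_all = [507.21, 526.75, 527.99, 525.95, 525.38, 523.39,
              520.87]
spread = max(kvm_pv_all) - min(kvm_pv_all)  # ~21 Mbit/s
```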

-------------------------------------------------------------------------
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now >> http://get.splunk.com/

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: PV network performance comparison
       [not found] ` <OF0BAE27C6.6B75A1D6-ON85257371.00749F3B-85257371.007646DD-r/Jw6+rmf7HQT0dZR+AlfA@public.gmane.org>
@ 2007-10-11 23:12   ` Dor Laor
  2007-10-15  5:34   ` Zhao Forrest
  2007-10-15  8:51   ` Avi Kivity
  2 siblings, 0 replies; 5+ messages in thread
From: Dor Laor @ 2007-10-11 23:12 UTC (permalink / raw)
  To: James Dykman
  Cc: dor.laor-Re5JQEeQqe8AvxtiuMwx3w,
	kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f

James Dykman wrote:
> Dor,
>
> I ran some netperf tests with your PV 
> virtio drivers, along with some Xen PV cases
> and a few others for comparison. I thought you
> (and the list) might be interested in the numbers. 
>
>   
Thanks for the tests, they're indeed interesting.
Actually, except for one small optimization (receiving several 
msgs from the tap and sending a single irq), I haven't had 
time to optimize the code. It would also be interesting to 
check what lguest is doing, since the qemu path is not 
polished and lguest has newer virtio drivers.
> I am going to start looking for bottlenecks, unless
> you need help with the new hypercall updates.
> I'll re-run when that is available.
>
>   
Any help would be great. I also need to move towards the latest 
virtio patch, which includes a change in the shared memory and a 
pci-like config space. I plan to start on that mid next week.

W.r.t. performance, the following could improve things:
 - Avi's shortened-latency tap patch
 - Using scatter-gather in the qemu tap
   (its absence is why using bigger pkts doesn't help performance)
 - Minimizing guest tx hypercalls
 - Running oprofile
 - A host-side kernel driver
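The small optimization mentioned above (receiving several msgs 
from the tap and sending a single irq) can be sketched roughly 
like this; the names and structure are illustrative stand-ins 
for the real qemu tap fd, virtio rx ring, and irq injection 
path, not the actual code:

```python
# Illustrative sketch of rx interrupt mitigation: drain
# everything pending on the tap before signalling the guest
# once, instead of injecting one irq per packet.
def drain_tap(tap, ring, inject_irq):
    delivered = 0
    while ring.has_space():
        pkt = tap.try_read()  # non-blocking; None when drained
        if pkt is None:
            break
        ring.push(pkt)
        delivered += 1
    if delivered:
        inject_irq()  # a single interrupt covers the batch
    return delivered
```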

Thanks,
Dor.
> Jim
>
> Tests were run with Netperf-2.4.3, TCP Socket 
> buffers were 256k. All of the tests were run with
> netserver in the guest, netperf in the host/dom0.
> No bridge was used.
>
> Hardware: IBM HS21 blade 
>         Dual Xeon w/HT @ 1.6GHz, 4GB
>
> The host/Dom0 configuration:
> kvm.*:
>         Host is 32 bit Ubuntu 7.04 server running
>         Dor's 2.6.22-rc3 kernel.
> xen.*:
>         Dom0 is 32 bit Ubuntu 7.04 server running
>         the 2.6.18 kernel from xen3.1
>
> The guest configurations:
> All guests/domUs are 512MB, 1 CPU
> kvm.rtl: (KVM with emulated RTL8029)
>         Fedora 7 32 bit guest
>         Standard 2.6.21-1.3194.fc7 kernel
> kvm.pv: (KVM w/Dor's paravirt drivers)
>         Fedora 7 32 bit guest running 
>         Dor's 2.6.22-rc3 kernel. 
> xen.pv: (Xen paravirt)
>         Ubuntu 7.04 server w/2.6.18-xen kernel 
> xen.um: (Xen HVM with unmodified drivers)
>         Ubuntu 7.04 server w/2.6.18-xen kernel, 
>         unmodified drivers compiled from xen3.1 
> kvm.lo: (Host loopback)
>
> TCP REQUEST/RESPONSE (Trans. Rate per sec)
> size  kvm.rtl  kvm.pv    xen.pv    xen.um    kvm.lo
> 1     2191.47  9533.74  18052.37  13593.58  42400.73
> 64    2184.30  9518.13  17979.93  13557.98  42260.53
> 128   2177.52  9482.45  17940.08  13588.54  40983.90
> 256   2160.49  9465.97  17788.21  13492.42  41170.45
> 512   2130.99  9403.33  17655.11  13489.64  40765.26
> 1024  2074.85  9204.90  17293.06  13572.01  39437.78
> 2048   416.18  4750.41  12907.57  11571.07  37252.42
> 4096   265.22  3691.90  10990.67   9943.64  31905.03
> 8192   116.80  1892.25   8439.83   6604.64  24397.95
> 16384   92.06  1004.58   4535.86   3924.68  17460.30
>
> TCP STREAM (Throughput 10^6bits/sec)
>   size    kvm.rtl  kvm.pv   xen.pv   xen.um  kvm.lo
>    2048    33.06   507.21   555.94  1442.38  5409.73
>    4096    33.16   526.75   848.26  2359.42  6152.48
>    8192    33.13   527.99   997.69  2418.87  7267.73
>   16384    33.08   525.95  1107.64  2379.50  8434.29
>   32768    33.13   525.38  1199.08  2375.81  8857.09
>   65536    33.20   523.39  1255.33  2473.92  9248.35
>  131072    33.11   520.87  1292.54  2605.49  8559.21
>
> _______________________________________________
> kvm-devel mailing list
> kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org
> https://lists.sourceforge.net/lists/listinfo/kvm-devel
>
>   



^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: PV network performance comparison
       [not found] ` <OF0BAE27C6.6B75A1D6-ON85257371.00749F3B-85257371.007646DD-r/Jw6+rmf7HQT0dZR+AlfA@public.gmane.org>
  2007-10-11 23:12   ` Dor Laor
@ 2007-10-15  5:34   ` Zhao Forrest
       [not found]     ` <ac8af0be0710142234q19155a7aia00adf12d0c6e62a-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  2007-10-15  8:51   ` Avi Kivity
  2 siblings, 1 reply; 5+ messages in thread
From: Zhao Forrest @ 2007-10-15  5:34 UTC (permalink / raw)
  To: James Dykman
  Cc: dor.laor-Re5JQEeQqe8AvxtiuMwx3w,
	kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f

On 10/12/07, James Dykman <dykman-r/Jw6+rmf7HQT0dZR+AlfA@public.gmane.org> wrote:
> Dor,
>
> I ran some netperf tests with your PV
> virtio drivers, along with some Xen PV cases
> and a few others for comparison. I thought you
> (and the list) might be interested in the numbers.
>
> I am going to start looking for bottlenecks, unless
> you need help with the new hypercall updates.
> I'll re-run when that is available.
>
> Jim
>
> Tests were run with Netperf-2.4.3, TCP Socket
> buffers were 256k. All of the tests were run with
> netserver in the guest, netperf in the host/dom0.
> No bridge was used.
>
> Hardware: IBM HS21 blade
>         Dual Xeon w/HT @ 1.6GHz, 4GB
>
> The host/Dom0 configuration:
> kvm.*:
>         Host is 32 bit Ubuntu 7.04 server running
>         Dor's 2.6.22-rc3 kernel.
> xen.*:
>         Dom0 is 32 bit Ubuntu 7.04 server running
>         the 2.6.18 kernel from xen3.1
>
> The guest configurations:
> All guests/domUs are 512MB, 1 CPU
> kvm.rtl: (KVM with emulated RTL8029)
>         Fedora 7 32 bit guest
>         Standard 2.6.21-1.3194.fc7 kernel
> kvm.pv: (KVM w/Dor's paravirt drivers)
>         Fedora 7 32 bit guest running
>         Dor's 2.6.22-rc3 kernel.
> xen.pv: (Xen paravirt)
>         Ubuntu 7.04 server w/2.6.18-xen kernel
> xen.um: (Xen HVM with unmodified drivers)
>         Ubuntu 7.04 server w/2.6.18-xen kernel,
>         unmodified drivers compiled from xen3.1
> kvm.lo: (Host loopback)
>
> TCP REQUEST/RESPONSE (Trans. Rate per sec)
> size  kvm.rtl  kvm.pv    xen.pv    xen.um    kvm.lo
> 1     2191.47  9533.74  18052.37  13593.58  42400.73
> 64    2184.30  9518.13  17979.93  13557.98  42260.53
> 128   2177.52  9482.45  17940.08  13588.54  40983.90
> 256   2160.49  9465.97  17788.21  13492.42  41170.45
> 512   2130.99  9403.33  17655.11  13489.64  40765.26
> 1024  2074.85  9204.90  17293.06  13572.01  39437.78
> 2048   416.18  4750.41  12907.57  11571.07  37252.42
> 4096   265.22  3691.90  10990.67   9943.64  31905.03
> 8192   116.80  1892.25   8439.83   6604.64  24397.95
> 16384   92.06  1004.58   4535.86   3924.68  17460.30
>
> TCP STREAM (Throughput 10^6bits/sec)
>   size    kvm.rtl  kvm.pv   xen.pv   xen.um  kvm.lo
>    2048    33.06   507.21   555.94  1442.38  5409.73
>    4096    33.16   526.75   848.26  2359.42  6152.48
>    8192    33.13   527.99   997.69  2418.87  7267.73
>   16384    33.08   525.95  1107.64  2379.50  8434.29
>   32768    33.13   525.38  1199.08  2375.81  8857.09
>   65536    33.20   523.39  1255.33  2473.92  9248.35
>  131072    33.11   520.87  1292.54  2605.49  8559.21
>
When running KVM (kvm.rtl) and Xen HVM (xen.um) on the same machine, I
feel that the guest OS on top of KVM is much more responsive than
the one on top of Xen HVM. But this test result shows that Xen HVM is
more responsive than KVM. Weird. I once tried KVM-36 and xen-3.0.1
and got the same impression.

Thanks,
Forrest


^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: PV network performance comparison
       [not found] ` <OF0BAE27C6.6B75A1D6-ON85257371.00749F3B-85257371.007646DD-r/Jw6+rmf7HQT0dZR+AlfA@public.gmane.org>
  2007-10-11 23:12   ` Dor Laor
  2007-10-15  5:34   ` Zhao Forrest
@ 2007-10-15  8:51   ` Avi Kivity
  2 siblings, 0 replies; 5+ messages in thread
From: Avi Kivity @ 2007-10-15  8:51 UTC (permalink / raw)
  To: James Dykman
  Cc: dor.laor-Re5JQEeQqe8AvxtiuMwx3w,
	kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f

James Dykman wrote:
> Dor,
>
> I ran some netperf tests with your PV 
> virtio drivers, along with some Xen PV cases
> and a few others for comparison. I thought you
> (and the list) might be interested in the numbers. 
>
> I am going to start looking for bottlenecks, unless
> you need help with the new hypercall updates.
> I'll re-run when that is available.
>
> Jim
>
> TCP REQUEST/RESPONSE (Trans. Rate per sec)
> size  kvm.rtl  kvm.pv    xen.pv    xen.um    kvm.lo
> 1     2191.47  9533.74  18052.37  13593.58  42400.73
> 64    2184.30  9518.13  17979.93  13557.98  42260.53
> 128   2177.52  9482.45  17940.08  13588.54  40983.90
> 256   2160.49  9465.97  17788.21  13492.42  41170.45
> 512   2130.99  9403.33  17655.11  13489.64  40765.26
> 1024  2074.85  9204.90  17293.06  13572.01  39437.78
> 2048   416.18  4750.41  12907.57  11571.07  37252.42
> 4096   265.22  3691.90  10990.67   9943.64  31905.03
> 8192   116.80  1892.25   8439.83   6604.64  24397.95
> 16384   92.06  1004.58   4535.86   3924.68  17460.30
>
>   

A flood ping from guest to host gives almost 10000 transmissions/sec 
with the rtl8139 and an FC6 x86_64 guest here.


> TCP STREAM (Throughput 10^6bits/sec)
>   size    kvm.rtl  kvm.pv   xen.pv   xen.um  kvm.lo
>    2048    33.06   507.21   555.94  1442.38  5409.73
>    4096    33.16   526.75   848.26  2359.42  6152.48
>    8192    33.13   527.99   997.69  2418.87  7267.73
>   16384    33.08   525.95  1107.64  2379.50  8434.29
>   32768    33.13   525.38  1199.08  2375.81  8857.09
>   65536    33.20   523.39  1255.33  2473.92  9248.35
>  131072    33.11   520.87  1292.54  2605.49  8559.21
>   

xen.um is higher than xen.pv?

I get 8.4 MB/s (67Mb/s) using netcat from guest to host.


-- 
error compiling committee.c: too many arguments to function



^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: PV network performance comparison
       [not found]     ` <ac8af0be0710142234q19155a7aia00adf12d0c6e62a-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2007-10-15  8:58       ` Avi Kivity
  0 siblings, 0 replies; 5+ messages in thread
From: Avi Kivity @ 2007-10-15  8:58 UTC (permalink / raw)
  To: Zhao Forrest
  Cc: James Dykman, dor.laor-Re5JQEeQqe8AvxtiuMwx3w,
	kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f

Zhao Forrest wrote:
>
> When running KVM (kvm.rtl) and Xen HVM (xen.um) on the same machine, I
> feel that the guest OS on top of KVM is much more responsive than
> the one on top of Xen HVM. But this test result shows that Xen HVM is
> more responsive than KVM. Weird. I once tried KVM-36 and xen-3.0.1
> and got the same impression.
>   

KVM certainly has an edge in latency because there are fewer layers and 
schedulers involved.  Regarding throughput, the numbers for kvm.rtl look 
lower than expected while xen.um's numbers are unrealistically high.  
The test needs to be done more carefully (using a recent kvm, too).

-- 
error compiling committee.c: too many arguments to function



^ permalink raw reply	[flat|nested] 5+ messages in thread

end of thread, other threads:[~2007-10-15  8:58 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2007-10-11 21:30 PV network performance comparison James Dykman
     [not found] ` <OF0BAE27C6.6B75A1D6-ON85257371.00749F3B-85257371.007646DD-r/Jw6+rmf7HQT0dZR+AlfA@public.gmane.org>
2007-10-11 23:12   ` Dor Laor
2007-10-15  5:34   ` Zhao Forrest
     [not found]     ` <ac8af0be0710142234q19155a7aia00adf12d0c6e62a-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2007-10-15  8:58       ` Avi Kivity
2007-10-15  8:51   ` Avi Kivity

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox