From mboxrd@z Thu Jan  1 00:00:00 1970
From: Dor Laor
Subject: Re: bridge + KVM performance
Date: Mon, 06 Jul 2009 14:53:15 +0300
Message-ID: <4A51E5AB.7070103@redhat.com>
References: <1246872854.11177.1.camel@bl3aed4p.de.ibm.com>
In-Reply-To: <1246872854.11177.1.camel@bl3aed4p.de.ibm.com>
Reply-To: dlaor@redhat.com
To: Martin Petermann
Cc: kvm@vger.kernel.org, ralphw@linux.vnet.ibm.com

On 07/06/2009 12:34 PM, Martin Petermann wrote:
> I'm currently looking at the network performance between two KVM guests
> running on the same host. The host system has two quad-core 3 GHz Xeons
> and 32 GB of memory. 2 GB of memory is assigned to the guests, enough
> that swap is not used. I'm using RHEL 5.3 (2.6.18-128.1.10.el5) on all
> three systems:
>
>  ____________________       ____________________
> |                    |     |                    |
> |     KVM guest      |     |     KVM guest      |
> |    ic01vn08man     |     |    ic01vn09man     |
> |____________________|     |____________________|
>           \                     /
>            \                   /
>             \                 /
>              \               /
>               \             /
>           ____\___________/______
>          |                      |
>          |       KVM host       |
>          | ethernet bridge: br3 |
>          |______________________|
>
>
> On the host I've created a network bridge in the following way
>
> [root@ic01in01man ~]# cat /etc/sysconfig/network-scripts/ifcfg-br3
> DEVICE=br3
> TYPE=Bridge
> ONBOOT=yes
>
> and set up the bridge with the commands
>
> brctl addbr br3
> ifconfig br3 up
>
> Within the configuration files of the KVM guests I added the following
> sections:
>
> ic01vn08man.xml:
> ...
>     <interface type='bridge'>
>       <source bridge='br3'/>
>       <mac address='00:ad:be:ef:99:08'/>
>       <model type='virtio'/>
>     </interface>
> ...
>
> ic01vn09man.xml
> ...
>     <interface type='bridge'>
>       <source bridge='br3'/>
>       <mac address='00:ad:be:ef:99:09'/>
>       <model type='virtio'/>
>     </interface>
> ...
>
> Within the guests I have configured the network in the following way:
>
> [root@ic01vn08man ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth3
> # Virtio Network Device
> DEVICE=eth3
> BOOTPROTO=static
> IPADDR=192.168.100.8
> NETMASK=255.255.255.0
> HWADDR=00:AD:BE:EF:99:08
> ONBOOT=yes
>
> [root@ic01vn09man ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth3
> # Virtio Network Device
> DEVICE=eth3
> BOOTPROTO=static
> IPADDR=192.168.100.9
> NETMASK=255.255.255.0
> HWADDR=00:AD:BE:EF:99:09
> ONBOOT=yes
>
> If I now test the network performance using the iperf tool
> (http://sourceforge.net/projects/iperf/)
>
> performance between two guests (iperf server is running on the other
> guest ic01vn08man/192.168.100.8: ic01vn09man <-> ic01vn08man):
>
> [root@ic01vn09man ~]# nice -20 iperf -c 192.168.100.8 -t 60 -P 2 -l 2m
> -w 131072
> ------------------------------------------------------------
> Client connecting to 192.168.100.8, TCP port 5001
> TCP window size: 256 KByte (WARNING: requested 128 KByte)
> ------------------------------------------------------------
> [  4] local 192.168.100.9 port 34171 connected with 192.168.100.8 port
> 5001
> [  3] local 192.168.100.9 port 34170 connected with 192.168.100.8 port
> 5001
> [  4]  0.0-60.1 sec  2.54 GBytes   363 Mbits/sec
> [  3]  0.0-60.1 sec  2.53 GBytes   361 Mbits/sec
> [SUM]  0.0-60.1 sec  5.06 GBytes   724 Mbits/sec
>
> results within the same guest (iperf server is running on the same
> system: ic01vn08man <-> ic01vn08man):
>
> [root@ic01vn08man ~]# nice -20 iperf -c 192.168.100.8 -t 60 -P 2 -l 2m
> -w 131072

If you drop the -w 131072 you will get over 1 Gbit/s. With this undersized
socket buffer you get a lot of idle time (check your CPU consumption).
Using netperf is recommended instead. You can also look at one of VMware's
performance documents to see how large a difference message size and
socket buffer size make.
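The window-limited ceiling can be sanity-checked with a bandwidth-delay-product
estimate. This is only an illustrative sketch, not something from the thread:
the effective RTT below is a hypothetical value back-solved from the measured
per-stream rate, to show that a fixed 256 KByte window plus a few milliseconds
of effective round-trip latency is enough to explain ~360 Mbit/s per stream.

```python
# Bandwidth-delay-product check for the window-limited iperf run above.
# A TCP sender with a fixed window W and round-trip time RTT cannot
# push more than W/RTT onto the wire.

def max_throughput_mbit(window_bytes: int, rtt_s: float) -> float:
    """Upper bound on a single TCP stream, in Mbit/s."""
    return window_bytes * 8 / rtt_s / 1e6

# iperf actually granted a 256 KByte window (see the WARNING line above).
window = 256 * 1024

# Hypothetical effective RTT between the two virtio guests, chosen so
# the bound matches the measured ~361 Mbit/s per stream; the point is
# that milliseconds of queueing/idle time, not link speed, set the cap.
rtt = 5.8e-3  # seconds

print(round(max_throughput_mbit(window, rtt)))  # per-stream bound, Mbit/s
```

With kernel autotuning (no -w), the window grows well past 256 KByte and
the same effective latency no longer caps the stream.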
> ------------------------------------------------------------
> Client connecting to 192.168.100.8, TCP port 5001
> TCP window size: 256 KByte (WARNING: requested 128 KByte)
> ------------------------------------------------------------
> [  4] local 192.168.100.8 port 55418 connected with 192.168.100.8 port
> 5001
> [  3] local 192.168.100.8 port 55417 connected with 192.168.100.8 port
> 5001
> [  3]  0.0-60.0 sec  46.2 GBytes  6.62 Gbits/sec
> [  4]  0.0-60.0 sec  45.2 GBytes  6.47 Gbits/sec
> [SUM]  0.0-60.0 sec  91.4 GBytes  13.1 Gbits/sec
>
> 724 Mbits/sec is far from what I expected. The host system is
> connected with 10G ethernet, so I would expect similar performance
> between the guests.
>
> Regards
> Martin
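For reference, a re-test along the lines suggested above might look like the
following. These invocations are illustrative, not from the original thread;
for the netperf case, netserver must already be running on the target.

```shell
# On ic01vn08man (192.168.100.8): start the iperf server
iperf -s

# On ic01vn09man: the same two-stream 60 s test, but without -w,
# so the kernel can autotune the socket buffers
nice -20 iperf -c 192.168.100.8 -t 60 -P 2 -l 2m

# netperf alternative: a 60-second TCP bulk-transfer test
netperf -H 192.168.100.8 -l 60 -t TCP_STREAM
```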