From: TooMeeK
Subject: Possible to reach more than 1Gbit to VM?
Date: Wed, 20 Jul 2011 17:00:22 +0200
Message-ID: <4E26ED86.7050002@o2.pl>
In-Reply-To: <4E26BCC7.3070503@o2.pl>
References: <4E26BCC7.3070503@o2.pl>
To: KVM list

Hello,

I've been playing around with KVM for a few years now. What I'm wondering is whether it's possible to combine bonding and bridging to get more than a single Gigabit link between a client and a VM. Looking around the net, everyone says to use LACP. I already did that and it worked, but throughput stayed at the speed of one NIC.

This is my working setup on Debian Squeeze 64-bit:

*cat /proc/net/bonding/bond0*

Ethernet Channel Bonding Driver: v3.5.0 (November 4, 2008)

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 200
Down Delay (ms): 200

Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:11:22:33:44:55

Slave Interface: eth2
MII Status: up
Link Failure Count: 0
Permanent HW addr: 55:44:33:22:11:00

*cat /etc/network/interfaces*

auto lo
iface lo inet loopback

# The bonded network interface for LAN
auto bond0
iface bond0 inet manual
        bond-slaves none
        bond-mode balance-rr
        bond-miimon 100
        #bond_lacp_rate fast
        #bond_ad_select 0
        arp_interval 80
        up /sbin/ifenslave bond0 eth1 eth2
        down /sbin/ifenslave bond0 -d eth1 eth2

# Onboard NIC #1 - Nvidia Gigabit
auto eth1
iface eth1 inet manual
        bond-master bond0

# NIC #2 - Intel PRO/1000 F Server Adapter (fiber)
auto eth2
iface eth2 inet manual
        bond-master bond0

# Bridge to LAN for the KVM virtual network
auto br0
iface br0 inet static
        address 10.0.0.250
        netmask 255.255.255.0
        network 10.0.0.0
        broadcast 10.0.0.255
        gateway 10.0.0.249
        dns-nameservers 10.0.0.249 8.8.8.8
        bridge-ports bond0
        bridge-fd 9
        bridge-hello 2
        bridge-maxage 12
        bridge-stp off

# NIC #3 - modem
auto eth0
iface eth0 inet manual

# Bridge LAN to the KVM virtual network - modem
iface br1 inet manual
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        bridge_maxwait 0
        metric 1
auto br1

*cat /etc/modprobe.d/bonding.conf*

alias bond0 bonding
options bonding mode=balance-rr miimon=100 downdelay=200 updelay=200 arp_interval=80

I've already tried the following (single switch, not multiple switches):
- LACP in Debian + LACP on the switch
- static bond0 (round-robin) + static link aggregation on the switch, for both the client and the hypervisor
- several switches (HP V1910, 3Com 3824 and Planet GSD-802S)
- several NICs, including Intel PRO/1000 F and MF fiber adapters
- for comparison, I can reach ~1.9 Gbit between two non-virtualised servers through the 3Com 3824 with NO link aggregation configured on the switch
- I already reached almost native speed (940 Mbit/s) from the client to a VM using virtio-net and Debian
- tests with iperf, iSCSI and NFS; to avoid I/O limits I used ramdisks (an example iperf run is below)
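This is roughly how the iperf tests look (just a sketch; the VM address 10.0.0.251 is only an example, the flags are standard iperf options):

# in the VM
iperf -s

# on the client: 30 seconds, first a single TCP stream, then 4 parallel ones
iperf -c 10.0.0.251 -t 30
iperf -c 10.0.0.251 -t 30 -P 4

As far as I understand, with balance-rr a single stream can be spread over both slaves, while with LACP each stream always hashes onto one slave, so I compare single-stream and multi-stream results.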
Questions:
- Is it even possible?
- Or do I have to create MORE bridge interfaces, one per NIC, and set up the aggregated link inside the VM instead (rough sketch in the P.S. below)?
- Can the bridge interface itself limit bandwidth to 1 Gbit?

Regards,
Tom
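P.S. To make the second question concrete, this is what I have in mind (untested sketch; br2/br3 and the tap/MAC names are only examples, since br0/br1 are already taken in my current config):

# /etc/network/interfaces on the host: one bridge per physical NIC, no bond on the host
auto br2
iface br2 inet manual
        bridge_ports eth1
        bridge_stp off

auto br3
iface br3 inet manual
        bridge_ports eth2
        bridge_stp off

The guest would then get two virtio NICs, one tap on each bridge, e.g. on the qemu-kvm command line (exact syntax depends on the version):

-netdev tap,id=net0,ifname=tap0,script=no,downscript=no -device virtio-net-pci,netdev=net0,mac=52:54:00:12:34:56 \
-netdev tap,id=net1,ifname=tap1,script=no,downscript=no -device virtio-net-pci,netdev=net1,mac=52:54:00:12:34:57

with tap0 added to br2 and tap1 to br3 (brctl addif), and inside the guest a bond0 in balance-rr over the two virtual NICs, mirroring the client side.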