* Possible to reach more than 1Gbit to VM?
[not found] <4E26BCC7.3070503@o2.pl>
@ 2011-07-20 15:00 ` TooMeeK
[not found] ` <CAOjFWZ57--PffFXQdbYNfBXfnZoBWZaDq9gUM7=w8Ycw3JcDuw@mail.gmail.com>
0 siblings, 1 reply; 3+ messages in thread
From: TooMeeK @ 2011-07-20 15:00 UTC (permalink / raw)
To: KVM list
Hello,
I've been playing around with KVM for a few years.
Now I'm wondering: is it possible to combine bonding and bridging to get
more than a single Gigabit link between a client and a VM?
Looking around the net, everyone says to use LACP. I've already tried
that and it worked, but still only at single-NIC speed.
This is my working setup on Debian Squeeze 64-bit:
*cat /proc/net/bonding/bond0*
Ethernet Channel Bonding Driver: v3.5.0 (November 4, 2008)
Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 200
Down Delay (ms): 200
Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:11:22:33:44:55
Slave Interface: eth2
MII Status: up
Link Failure Count: 0
Permanent HW addr: 55:44:33:22:11:00
*cat /etc/network/interfaces*
auto lo
iface lo inet loopback
# The bonded network interface for LAN
auto bond0
iface bond0 inet manual
bond-slaves none
bond-mode balance-rr
bond-miimon 100
#bond_lacp_rate fast
#bond_ad_select 0
arp_interval 80
up /sbin/ifenslave bond0 eth1 eth2
down /sbin/ifenslave bond0 -d eth1 eth2
#Onboard NIC #1 Nvidia Gigabit
auto eth1
iface eth1 inet manual
bond-master bond0
#NIC #2 Intel PRO/1000 F Server Adapter - FIBER
auto eth2
iface eth2 inet manual
bond-master bond0
# Bridge to LAN for virtual network KVM
auto br0
iface br0 inet static
address 10.0.0.250
netmask 255.255.255.0
network 10.0.0.0
broadcast 10.0.0.255
gateway 10.0.0.249
dns-nameservers 10.0.0.249 8.8.8.8
bridge-ports bond0
bridge-fd 9
bridge-hello 2
bridge-maxage 12
bridge-stp off
#NIC #3 - modem
auto eth0
iface eth0 inet manual
#Bridge LAN to virtual network KVM - modem
iface br1 inet manual
bridge_ports eth0
bridge_stp off
bridge_fd 0
bridge_maxwait 0
metric 1
auto br1
*cat /etc/modprobe.d/bonding.conf*
alias bond0 bonding
options bonding mode=balance-rr miimon=100 downdelay=200 updelay=200 arp_interval=80
I've tried the following already (single switch, not multiple):
- LACP in Debian + LACP on the switch
- static bond0 (round-robin) + static link aggregation on the switch for
both the client and the hypervisor
- several switches (HP V1910, 3Com 3824 and Planet GSD-802S)
- several NICs, including Intel PRO/1000 F and MF fiber adapters
- for example, I can reach ~1.9 Gbit/s between two non-virtualised servers
using the 3Com 3824 with NO link aggregation configured on the switch
- I already reach almost native speed (940 Mbit/s) from client to VM using
virtio-net and Debian
- tests with iperf, iSCSI and NFS; ramdisks are used to avoid I/O limits
Questions:
- is it even possible?
- do I have to create MORE bridge interfaces, one per NIC, and set up an
aggregated link inside the VM instead (see the sketch after these questions)?
- can a bridge interface limit bandwidth to 1 Gbit/s?
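For the second question, this is roughly what I have in mind: one bridge per
physical NIC on the hypervisor, with the aggregation done inside the guest.
Just a sketch; the bridge/tap names and ifup scripts are made up and I have
not tested it:

# /etc/network/interfaces on the hypervisor (sketch)
auto br_eth1
iface br_eth1 inet manual
bridge-ports eth1
bridge-stp off
auto br_eth2
iface br_eth2 inet manual
bridge-ports eth2
bridge-stp off

# give the VM two virtio NICs, one tap per bridge
# (each ifup script brings the tap up and adds it to its bridge with brctl addif)
kvm ... \
-netdev tap,id=net0,ifname=tap0,script=/etc/kvm/ifup-br_eth1 \
-device virtio-net-pci,netdev=net0,mac=52:54:00:00:00:01 \
-netdev tap,id=net1,ifname=tap1,script=/etc/kvm/ifup-br_eth2 \
-device virtio-net-pci,netdev=net1,mac=52:54:00:00:00:02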
Regards,
Tom
* Re: Possible to reach more than 1Gbit to VM?
[not found] ` <CAOjFWZ57--PffFXQdbYNfBXfnZoBWZaDq9gUM7=w8Ycw3JcDuw@mail.gmail.com>
@ 2011-07-20 15:56 ` TooMeeK
0 siblings, 0 replies; 3+ messages in thread
From: TooMeeK @ 2011-07-20 15:56 UTC (permalink / raw)
Cc: KVM list
A single connection can run at double speed; I've checked this with iperf
and nuttcp.
So bonding the interfaces inside the VM, without bonding on the hypervisor,
won't work either?
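For reference, this is the kind of guest-side setup I meant (a sketch,
assuming the VM sees its two virtio NICs as eth0 and eth1; not tested yet):

# /etc/network/interfaces inside the guest (sketch)
auto bond0
iface bond0 inet static
address 10.0.0.248
netmask 255.255.255.0
gateway 10.0.0.249
bond-slaves eth0 eth1
bond-mode balance-rr
bond-miimon 100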
The round-robin policy provides load balancing and failover; all NICs work
together, as I can see from the statistics:
LAB SERVER
cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)
Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 200
Down Delay (ms): 200
Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:1f:1f:fa:3f:a9
Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:24:1d:66:b7:9a
ifconfig bond0
bond0 Link encap:Ethernet HWaddr 00:1f:1f:fa:3f:a9
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
RX packets:62560993 errors:0 dropped:0 overruns:0 frame:0
TX packets:34620931 errors:0 dropped:92 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:64688731563 (64.6 GB) TX bytes:15820286443 (15.8 GB)
ifconfig eth0
eth0 Link encap:Ethernet HWaddr 00:1f:1f:fa:3f:a9
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:48828189 errors:0 dropped:0 overruns:0 frame:0
TX packets:17310186 errors:0 dropped:92 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:49519993668 (49.5 GB) TX bytes:7910144215 (7.9 GB)
Interrupt:44 Base address:0x4000
ifconfig eth1
eth1 Link encap:Ethernet HWaddr 00:1f:1f:fa:3f:a9
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:13733216 errors:0 dropped:0 overruns:0 frame:0
TX packets:17310956 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:15169147719 (15.1 GB) TX bytes:7910155539 (7.9 GB)
Interrupt:43 Base address:0xa000
HOME SERVER
cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.5.0 (November 4, 2008)
Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 200
Down Delay (ms): 200
Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:25:22:8a:7a:ef
Slave Interface: eth2
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:03:47:b1:e3:41
/sbin/ifconfig bond0
bond0 Link encap:Ethernet HWaddr 00:25:22:8a:7a:ef
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
RX packets:5401406 errors:0 dropped:0 overruns:0 frame:0
TX packets:8713650 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:404938085 (386.1 MiB) TX bytes:12497904912 (11.6 GiB)
/sbin/ifconfig eth1
eth1 Link encap:Ethernet HWaddr 00:25:22:8a:7a:ef
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:347896 errors:0 dropped:0 overruns:0 frame:0
TX packets:4356784 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:21551680 (20.5 MiB) TX bytes:6257879091 (5.8 GiB)
Interrupt:27 Base address:0x8000
/sbin/ifconfig eth2
eth2 Link encap:Ethernet HWaddr 00:25:22:8a:7a:ef
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:5053513 errors:0 dropped:0 overruns:0 frame:0
TX packets:4356866 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:383386585 (365.6 MiB) TX bytes:6240025821 (5.8 GiB)
There is also a package called "balance":
Description: Load balancing solution and generic tcp proxy
 Balance is a load balancing solution being a simple but powerful generic tcp
 proxy with round robin load balancing and failover mechanisms. Its behaviour
 can be controlled at runtime using a simple command line syntax.
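For example, something like this (my reading of the basic syntax; the port
and backend addresses are only placeholders):

# accept connections on TCP port 5001 and round-robin them
# between two backend hosts (same port on the backends)
balance 5001 10.0.0.248 10.0.0.250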
Regards,
Tom
On 20.07.2011 17:25, Freddie Cash wrote:
> No matter which bonding method you use, traffic between 1 client and
> the VM will go across one interface, thus limiting the traffic to 1 Gbps.
>
> All bonding does is allow you to have multiple 1 Gbps connections
> between multiple clients and the VM. Each connection is limited to 1
> Gbps, but you can have multiple connections going at once (each
> connection goes across a separate interface, managed by the bonding
> protocol).
>
> If you need more than 1 Gbps of throughput for a single connection,
> then you need a 10 Gbps (or faster) link. AFAIK, there's no support
> for 10 Gbps interfaces in KVM.
>
> On Wed, Jul 20, 2011 at 8:00 AM, TooMeeK <toomeek_85@o2.pl> wrote:
> [original message quoted in full; snipped]
>
> --
> Freddie Cash
> fjwcash@gmail.com
* Re: Possible to reach more than 1Gbit to VM?
[not found] <4E282E4C.2040400@o2.pl>
@ 2011-07-21 13:51 ` TooMeeK
0 siblings, 0 replies; 3+ messages in thread
From: TooMeeK @ 2011-07-21 13:51 UTC (permalink / raw)
To: KVM list
I still cannot believe this.
This is a test with bonding (round-robin) and bridging (bond0 -> br0 to
expose the VM to the LAN).
From a Debian VM with one virtio NIC to the Debian hypervisor I get (1, 2
and 3 connections at once):
user@vhost:~$ iperf -c 10.0.0.250
------------------------------------------------------------
Client connecting to 10.0.0.250, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.0.0.248 port 50679 connected with 10.0.0.250 port 5001
[ ID] Interval Transfer Bandwidth
*[ 3] 0.0-10.0 sec 1.65 GBytes 1.42 Gbits/sec*
user@vhost:~$ iperf -c 10.0.0.250 -P 2
------------------------------------------------------------
Client connecting to 10.0.0.250, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 4] local 10.0.0.248 port 50681 connected with 10.0.0.250 port 5001
[ 3] local 10.0.0.248 port 50680 connected with 10.0.0.250 port 5001
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 831 MBytes 696 Mbits/sec
[ 3] 0.0-10.0 sec 828 MBytes 694 Mbits/sec
*[SUM] 0.0-10.0 sec 1.62 GBytes 1.39 Gbits/sec*
user@vhost:~$ iperf -c 10.0.0.250 -P 3
------------------------------------------------------------
Client connecting to 10.0.0.250, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 5] local 10.0.0.248 port 54269 connected with 10.0.0.250 port 5001
[ 4] local 10.0.0.248 port 54268 connected with 10.0.0.250 port 5001
[ 3] local 10.0.0.248 port 54267 connected with 10.0.0.250 port 5001
[ ID] Interval Transfer Bandwidth
[ 5] 0.0-10.0 sec 528 MBytes 443 Mbits/sec
[ 4] 0.0-10.0 sec 540 MBytes 453 Mbits/sec
[ 3] 0.0-10.0 sec 553 MBytes 464 Mbits/sec
*[SUM] 0.0-10.0 sec 1.58 GBytes 1.36 Gbits/sec*
And from the hypervisor to that VM I get:
user@hypervisor:~$ iperf -c 10.0.0.248
------------------------------------------------------------
Client connecting to 10.0.0.248, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.0.0.250 port 35318 connected with 10.0.0.248 port 5001
[ ID] Interval Transfer Bandwidth
*[ 3] 0.0-10.0 sec 2.40 GBytes 2.06 Gbits/sec*
user@hypervisor:~$ iperf -c 10.0.0.248 -P 2
------------------------------------------------------------
Client connecting to 10.0.0.248, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 4] local 10.0.0.250 port 35320 connected with 10.0.0.248 port 5001
[ 3] local 10.0.0.250 port 35319 connected with 10.0.0.248 port 5001
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 1.09 GBytes 933 Mbits/sec
[ 3] 0.0-10.0 sec 1.34 GBytes 1.15 Gbits/sec
*[SUM] 0.0-10.0 sec 2.43 GBytes 2.08 Gbits/sec*
user@hypervisor:~$ iperf -c 10.0.0.248 -P 3
------------------------------------------------------------
Client connecting to 10.0.0.248, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.0.0.250 port 35323 connected with 10.0.0.248 port 5001
[ 5] local 10.0.0.250 port 35322 connected with 10.0.0.248 port 5001
[ 4] local 10.0.0.250 port 35321 connected with 10.0.0.248 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 827 MBytes 693 Mbits/sec
[ 5] 0.0-10.0 sec 841 MBytes 705 Mbits/sec
[ 4] 0.0-10.0 sec 823 MBytes 690 Mbits/sec
*[SUM] 0.0-10.0 sec 2.43 GBytes 2.09 Gbits/sec*
And this is just an AMD Phenom II X2 550 @ 3.1 GHz; Xeons can probably do
much more...
So what is the trick to getting a raw 2 Gbit/s from the LAN to the VM?
Maybe attach the NICs directly to the VM and then bond them inside the guest?
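By direct attach I mean something like KVM's PCI device assignment (a sketch
only; the PCI and vendor/device IDs below are examples for my Intel PRO/1000
cards, and this needs an IOMMU, i.e. VT-d or AMD-Vi):

# detach the NICs from the host drivers and bind them to pci-stub
echo "8086 1001" > /sys/bus/pci/drivers/pci-stub/new_id
echo 0000:03:00.0 > /sys/bus/pci/devices/0000:03:00.0/driver/unbind
echo 0000:03:00.0 > /sys/bus/pci/drivers/pci-stub/bind
# (repeat for the second NIC, e.g. 0000:04:00.0)

# start the guest with both NICs assigned, then bond them inside the guest
kvm ... -device pci-assign,host=03:00.0 -device pci-assign,host=04:00.0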
Regards,
Tom