public inbox for kvm@vger.kernel.org
* bridge with a bonded device - slow rate in the guest machine
@ 2011-05-20  9:12 Алексей Кашин
  2011-05-20 13:51 ` David Ahern
  0 siblings, 1 reply; 3+ messages in thread
From: Алексей Кашин @ 2011-05-20  9:12 UTC (permalink / raw)
  To: kvm

Hi.
I have a server with two gigabit NICs. I'm trying to set up a bridge on
top of a bonded device (2 links, balance-rr).
host# cat /etc/network/interfaces
auto lo
iface lo inet loopback

auto bond0
iface bond0 inet manual
        slaves eth0 eth1
        bond_mode balance-rr
        bond_miimon 100
        bond_updelay 200
        bond_downdelay 200

auto br0
iface br0 inet static
        address <ip>
        netmask <netmask>
        gateway <gateway>
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
        bridge_maxwait 0

host# cat /etc/modprobe.d/bonding.conf
alias bond0 bonding
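
Once the interfaces are up, the bond's state and the bridge membership can
be checked from the host (a sketch based on the configuration above; brctl
comes from Debian's bridge-utils package):

```
host# cat /proc/net/bonding/bond0   # reports bonding mode, MII status, and per-slave link state
host# brctl show br0                # should list bond0 as the bridge's port
```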

On this host I've created a few KVM virtual machines, each with the
following options:

host# virt-install \
        --name="name" \
        --ram=512 \
        --arch=x86_64 \
        --vcpus=1 \
        --cpuset=0 \
        --os-type=linux \
        --os-variant="debiansqueeze" \
        --hvm \
        --virt-type kvm \
        --accelerate \
        --cdrom=/iso/debian-6.0.1a-amd64-netinst.iso \
        --disk path=/dev/vg00/name,bus=virtio,cache=none,format=raw,sparse=false \
        --network bridge=br0,model=virtio \
        --autostart

When I try to download a file I can see that the rate is very low:

guest# wget http://mirror.yandex.ru/archlinux/iso/2010.05/archlinux-2010.05-core-dual.iso
--2011-05-20 12:47:16--
http://mirror.yandex.ru/archlinux/iso/2010.05/archlinux-2010.05-core-dual.iso
Resolving mirror.yandex.ru... 213.180.204.183, 2a02:6b8:0:201::1
Connecting to mirror.yandex.ru|213.180.204.183|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 691011584 (659M) [application/x-iso9660-image]
Saving to: 'archlinux-2010.05-core-dual.iso'

 0% [                                      ] 79,686      7.97K/s  eta 20h 24m

But if I try to get the same file from the host machine, the rate is
normal:

host# wget http://mirror.yandex.ru/archlinux/iso/2010.05/archlinux-2010.05-core-dual.iso
--2011-05-20 08:56:35--
http://mirror.yandex.ru/archlinux/iso/2010.05/archlinux-2010.05-core-dual.iso
Resolving mirror.yandex.ru... 213.180.204.183, 2a02:6b8:0:201::1
Connecting to mirror.yandex.ru|213.180.204.183|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 691011584 (659M) [application/x-iso9660-image]
Saving to: 'archlinux-2010.05-core-dual.iso'

21% [=================================>    ] 150,837,182 26.5M/s  eta 24s


If I remove the bonding on the host machine and set up the bridge
directly on eth0 instead, the rate is normal in both the host and the
virtual machine.

Host machine:
CPU: model name      : Intel(R) Xeon(R) CPU           E5504  @ 2.00GHz
KVM:
# dpkg -l | grep kvm
ii  qemu-kvm                            0.12.5+dfsg-5+squeeze1
Full virtualization on x86 hardware
Kernel:
# uname -a
Linux unixmon 2.6.32-5-amd64 #1 SMP Mon Mar 7 21:35:22 UTC 2011 x86_64 GNU/Linux
Guests:
All guests run Debian GNU/Linux 6.0.1a amd64 with kernel 2.6.32-5-amd64
#1 SMP Mon Mar 7 21:35:22 UTC 2011 x86_64 GNU/Linux


* Re: bridge with a bonded device - slow rate in the guest machine
  2011-05-20  9:12 bridge with a bonded device - slow rate in the guest machine Алексей Кашин
@ 2011-05-20 13:51 ` David Ahern
  2011-05-20 16:26   ` Алексей Кашин
  0 siblings, 1 reply; 3+ messages in thread
From: David Ahern @ 2011-05-20 13:51 UTC (permalink / raw)
  To: Алексей Кашин
  Cc: kvm



On 05/20/11 03:12, Алексей Кашин wrote:
> Hi.
> I have a server with two gigabit NICs. I'm trying to set up a bridge on
> top of a bonded device (2 links, balance-rr).
> host# cat /etc/network/interfaces
> auto lo
> iface lo inet loopback
> 
> auto bond0
> iface bond0 inet manual
>         slaves eth0 eth1
>         bond_mode balance-rr
>         bond_miimon 100
>         bond_updelay 200
>         bond_downdelay 200
> 
> auto br0
> iface br0 inet static
>         address <ip>
>         netmask <netmask>
>         gateway <gateway>
>         bridge_ports bond0
>         bridge_stp off
>         bridge_fd 0
>         bridge_maxwait 0
> 
> host# cat /etc/modprobe.d/bonding.conf
> alias bond0 bonding
> 
> On this host I've created a few KVM virtual machines, each with the
> following options:
> 
> host# virt-install \
>         --name="name" \
>         --ram=512 \
>         --arch=x86_64 \
>         --vcpus=1 \
>         --cpuset=0 \
>         --os-type=linux \
>         --os-variant="debiansqueeze" \
>         --hvm \
>         --virt-type kvm \
>         --accelerate \
>         --cdrom=/iso/debian-6.0.1a-amd64-netinst.iso \
>         --disk path=/dev/vg00/name,bus=virtio,cache=none,format=raw,sparse=false \
>         --network bridge=br0,model=virtio \
>         --autostart
> 
> When I try to download a file I can see that the rate is very low:
> 
> guest# wget http://mirror.yandex.ru/archlinux/iso/2010.05/archlinux-2010.05-core-dual.iso
> --2011-05-20 12:47:16--
> http://mirror.yandex.ru/archlinux/iso/2010.05/archlinux-2010.05-core-dual.iso
> Resolving mirror.yandex.ru... 213.180.204.183, 2a02:6b8:0:201::1
> Connecting to mirror.yandex.ru|213.180.204.183|:80... connected.
> HTTP request sent, awaiting response... 200 OK
> Length: 691011584 (659M) [application/x-iso9660-image]
> Saving to: 'archlinux-2010.05-core-dual.iso'
> 
>  0% [                                      ] 79,686      7.97K/s  eta 20h 24m
> 
> But if I try to get the same file from the host machine, the rate is
> normal:
> 
> host# wget http://mirror.yandex.ru/archlinux/iso/2010.05/archlinux-2010.05-core-dual.iso
> --2011-05-20 08:56:35--
> http://mirror.yandex.ru/archlinux/iso/2010.05/archlinux-2010.05-core-dual.iso
> Resolving mirror.yandex.ru... 213.180.204.183, 2a02:6b8:0:201::1
> Connecting to mirror.yandex.ru|213.180.204.183|:80... connected.
> HTTP request sent, awaiting response... 200 OK
> Length: 691011584 (659M) [application/x-iso9660-image]
> Saving to: 'archlinux-2010.05-core-dual.iso'
> 
> 21% [=================================>    ] 150,837,182 26.5M/s  eta 24s
> 
> 
> If I remove the bonding on the host machine and set up the bridge
> directly on eth0 instead, the rate is normal in both the host and the
> virtual machine.

Have you tried active-backup mode instead of balance-rr for the bonding
device? I have used that setup in the past and did not see any
performance issues with it.
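
For reference, switching the bond to active-backup only changes the bond0
stanza in /etc/network/interfaces (a sketch based on the poster's
configuration; option names as understood by Debian's ifenslave):

```
auto bond0
iface bond0 inet manual
        slaves eth0 eth1
        bond_mode active-backup
        bond_miimon 100
        bond_updelay 200
        bond_downdelay 200
```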

David


> 
> Host machine:
> CPU: model name      : Intel(R) Xeon(R) CPU           E5504  @ 2.00GHz
> KVM:
> # dpkg -l | grep kvm
> ii  qemu-kvm                            0.12.5+dfsg-5+squeeze1
> Full virtualization on x86 hardware
> Kernel:
> # uname -a
> Linux unixmon 2.6.32-5-amd64 #1 SMP Mon Mar 7 21:35:22 UTC 2011 x86_64 GNU/Linux
> Guests:
> All guests run Debian GNU/Linux 6.0.1a amd64 with kernel 2.6.32-5-amd64
> #1 SMP Mon Mar 7 21:35:22 UTC 2011 x86_64 GNU/Linux
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 


* Re: bridge with a bonded device - slow rate in the guest machine
  2011-05-20 13:51 ` David Ahern
@ 2011-05-20 16:26   ` Алексей Кашин
  0 siblings, 0 replies; 3+ messages in thread
From: Алексей Кашин @ 2011-05-20 16:26 UTC (permalink / raw)
  To: David Ahern; +Cc: kvm

Hi.
The problem is solved: I disabled the "large-receive-offload" (LRO)
option on the network cards.
Thanks.
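
For anyone hitting the same symptom: LRO can be disabled per NIC with
ethtool (a sketch; eth0/eth1 as in the configuration above, assuming the
driver exposes the large-receive-offload flag):

```
host# ethtool -k eth0 | grep large-receive-offload   # check current LRO state
host# ethtool -K eth0 lro off                        # disable LRO on each slave
host# ethtool -K eth1 lro off
```

To make the change persistent, the same ethtool -K commands can be added
as pre-up lines in the interface stanzas in /etc/network/interfaces.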

2011/5/20, David Ahern <daahern@cisco.com>:
>
>
> On 05/20/11 03:12, Алексей Кашин wrote:
>> Hi.
>> I have a server with two gigabit NICs. I'm trying to set up a bridge on
>> top of a bonded device (2 links, balance-rr).
>> host# cat /etc/network/interfaces
>> auto lo
>> iface lo inet loopback
>>
>> auto bond0
>> iface bond0 inet manual
>>         slaves eth0 eth1
>>         bond_mode balance-rr
>>         bond_miimon 100
>>         bond_updelay 200
>>         bond_downdelay 200
>>
>> auto br0
>> iface br0 inet static
>>         address <ip>
>>         netmask <netmask>
>>         gateway <gateway>
>>         bridge_ports bond0
>>         bridge_stp off
>>         bridge_fd 0
>>         bridge_maxwait 0
>>
>> host# cat /etc/modprobe.d/bonding.conf
>> alias bond0 bonding
>>
>> On this host I've created a few KVM virtual machines, each with the
>> following options:
>>
>> host# virt-install \
>>         --name="name" \
>>         --ram=512 \
>>         --arch=x86_64 \
>>         --vcpus=1 \
>>         --cpuset=0 \
>>         --os-type=linux \
>>         --os-variant="debiansqueeze" \
>>         --hvm \
>>         --virt-type kvm \
>>         --accelerate \
>>         --cdrom=/iso/debian-6.0.1a-amd64-netinst.iso \
>>         --disk path=/dev/vg00/name,bus=virtio,cache=none,format=raw,sparse=false \
>>         --network bridge=br0,model=virtio \
>>         --autostart
>>
>> When I try to download a file I can see that the rate is very low:
>>
>> guest# wget http://mirror.yandex.ru/archlinux/iso/2010.05/archlinux-2010.05-core-dual.iso
>> --2011-05-20 12:47:16--
>> http://mirror.yandex.ru/archlinux/iso/2010.05/archlinux-2010.05-core-dual.iso
>> Resolving mirror.yandex.ru... 213.180.204.183, 2a02:6b8:0:201::1
>> Connecting to mirror.yandex.ru|213.180.204.183|:80... connected.
>> HTTP request sent, awaiting response... 200 OK
>> Length: 691011584 (659M) [application/x-iso9660-image]
>> Saving to: 'archlinux-2010.05-core-dual.iso'
>>
>>  0% [                                      ] 79,686      7.97K/s  eta 20h 24m
>>
>> But if I try to get the same file from the host machine, the rate is
>> normal:
>>
>> host# wget http://mirror.yandex.ru/archlinux/iso/2010.05/archlinux-2010.05-core-dual.iso
>> --2011-05-20 08:56:35--
>> http://mirror.yandex.ru/archlinux/iso/2010.05/archlinux-2010.05-core-dual.iso
>> Resolving mirror.yandex.ru... 213.180.204.183, 2a02:6b8:0:201::1
>> Connecting to mirror.yandex.ru|213.180.204.183|:80... connected.
>> HTTP request sent, awaiting response... 200 OK
>> Length: 691011584 (659M) [application/x-iso9660-image]
>> Saving to: 'archlinux-2010.05-core-dual.iso'
>>
>> 21% [=================================>    ] 150,837,182 26.5M/s  eta 24s
>>
>>
>> If I remove the bonding on the host machine and set up the bridge
>> directly on eth0 instead, the rate is normal in both the host and the
>> virtual machine.
>
> Have you tried active-backup mode instead of balance-rr for the bonding
> device? I have used that setup in the past and did not see any
> performance issues with it.
>
> David
>
>
>>
>> Host machine:
>> CPU: model name      : Intel(R) Xeon(R) CPU           E5504  @ 2.00GHz
>> KVM:
>> # dpkg -l | grep kvm
>> ii  qemu-kvm                            0.12.5+dfsg-5+squeeze1
>> Full virtualization on x86 hardware
>> Kernel:
>> # uname -a
>> Linux unixmon 2.6.32-5-amd64 #1 SMP Mon Mar 7 21:35:22 UTC 2011 x86_64
>> GNU/Linux
>> Guests:
>> All guests run Debian GNU/Linux 6.0.1a amd64 with kernel 2.6.32-5-amd64
>> #1 SMP Mon Mar 7 21:35:22 UTC 2011 x86_64 GNU/Linux
>

