From: ebiederm@xmission.com (Eric W. Biederman)
To: Ryousei Takano <ryousei@gmail.com>
Cc: Daniel Lezcano <dlezcano@fr.ibm.com>,
	Linux Containers <containers@lists.osdl.org>,
	Linux Netdev List <netdev@vger.kernel.org>,
	lxc-devel@lists.sourceforge.net
Subject: Re: [lxc-devel] Poor bridging performance on 10 GbE
Date: Wed, 18 Mar 2009 17:50:16 -0700
Message-ID: <m1wsamcp4n.fsf@fess.ebiederm.org>
In-Reply-To: <b30d1c3b0903180856r6dce554ap35b8d6a3ee7829e0@mail.gmail.com> (Ryousei Takano's message of "Thu, 19 Mar 2009 00:56:53 +0900")

Ryousei Takano <ryousei@gmail.com> writes:

> I am using VServer because other virtualization mechanisms, including
> OpenVZ, Xen, and KVM, cannot fully utilize the network bandwidth of 10 GbE.
>
> Here are the results of the netperf benchmark (throughput in Mbps):
> 	vanilla (2.6.27-9)	9525.94
> 	VServer (2.6.27.10)	9521.79
> 	OpenVZ (2.6.27.10)	2049.89
> 	Xen (2.6.26.1)		1011.47
> 	KVM (2.6.27-9)		1022.42
>
> Now I am interested in using LXC instead of VServer.

A good argument.
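
For reference, netperf's TCP_STREAM test reports throughput in 10^6
bits/sec, which matches the figures above.  A minimal sketch of such a
run (the exact options used were not given, and the peer address here
is hypothetical):

    # Hypothetical invocation; 192.168.1.2 stands in for the receiver.
    netperf -H 192.168.1.2 -t TCP_STREAM -l 60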

>>> Using a macvlan device, the throughput was 9.6 Gbps.  But using a
>>> veth device, the throughput was only 2.7 Gbps.
>>
>> Yeah, definitely the macvlan interface is the best in terms of
>> performance, but with the restriction that containers on the same
>> host cannot communicate with each other.
>>
> This restriction is not a big issue for my purpose.

Right.  I have been trying to figure out what the best way to cope
with that restriction is.
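
The underlying limitation, as I understand it: sibling macvlan devices
on the same lower device exchange frames through the physical NIC, and
most adapters and switches will not loop those frames back.  A sketch
of such a setup (the device names here are made up for illustration):

    # Assume eth1 is the 10 GbE NIC; mvlan0/mvlan1 are hypothetical names.
    ip link add link eth1 name mvlan0 type macvlan
    ip link add link eth1 name mvlan1 type macvlan
    # Frames from mvlan0 to mvlan1 go out through eth1 and are normally
    # not hairpinned back, so the two containers never see each other.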

>>> I also checked the host OS's performance when I used a veth device.
>>> I observed a strange phenomenon.
>>>
>>> Before issuing lxc-start command, the throughput was 9.6 Gbps.
>>> Here is the output of brctl show:
>>>        $ brctl show
>>>        bridge name     bridge id               STP enabled     interfaces
>>>        br0             8000.0060dd470d49       no              eth1
>>>
>>> After issuing lxc-start command, the throughput decreased to 3.2 Gbps.
>>> Here is the output of brctl show:
>>>        $ sudo brctl show
>>>        bridge name     bridge id               STP enabled     interfaces
>>>        br0             8000.0060dd470d49       no              eth1
>>>                                                                veth0_7573
>>>
>>> I wonder why the performance is greatly influenced by adding a veth device
>>> to a bridge device.
>>
>> Hmm, good question :)

Last I looked, bridging uses the least common denominator of its member
ports' hardware offloads, which likely explains why adding a veth
device decreased your bridging performance.
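
One way to check this (a sketch; exact feature names vary by kernel
version) is to compare the offload flags of the physical NIC with those
of the veth once it has joined the bridge:

    ethtool -k eth1         # e.g. tcp segmentation offload: on
    ethtool -k veth0_7573   # software device; typically fewer offloads

If the bridge falls back to the intersection of those feature sets,
eth1 effectively loses its offloads the moment the veth is added.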

>>> Here is my experimental setting:
>>>        OS: Ubuntu server 8.10 amd64
>>>        Kernel: 2.6.27-rc8 (checkout from the lxc git repository)
>>
>> I would recommend using the vanilla 2.6.29-rc8, because this kernel
>> no longer needs patches, a lot of fixes were made in the network
>> namespace code, and maybe the bridge has been improved in the meantime :)
>>
> I checked out the vanilla 2.6.29-rc8 kernel.
> The performance after issuing lxc-start improved to 8.7 Gbps!
> That is a big improvement, but some performance loss remains.
> Can we avoid this loss?

Good question.  Any chance you can profile this and see where the
performance loss seems to be coming from?
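
On a 2.6.29 kernel, oprofile would be the usual tool; a hypothetical
session might look like this (the vmlinux path is a placeholder):

    opcontrol --vmlinux=/path/to/vmlinux --start
    netperf -H 192.168.1.2 -t TCP_STREAM -l 60   # hypothetical peer
    opcontrol --stop
    opreport --symbols | head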

Eric
