From: Daniel Lezcano <dlezcano@fr.ibm.com>
To: Herbert Poetzl <herbert@13thfloor.at>
Cc: Daniel Lezcano <dlezcano@fr.ibm.com>,
Linux Containers <containers@lists.osdl.org>,
Dmitry Mishin <dim@openvz.org>,
"Eric W. Biederman" <ebiederm@xmission.com>,
netdev@vger.kernel.org
Subject: Re: L2 network namespace benchmarking
Date: Wed, 28 Mar 2007 09:07:56 +0200
Message-ID: <460A144C.7070800@fr.ibm.com>
In-Reply-To: <20070327230827.GA22649@MAIL.13thfloor.at>
Herbert Poetzl wrote:
> On Wed, Mar 28, 2007 at 12:16:34AM +0200, Daniel Lezcano wrote:
>> Hi,
[ cut ]
>> 3. General observations
>> -----------------------
>>
>> The objective of having no performance degradation when the network
>> namespace is disabled in the kernel is met by both solutions.
>>
>> When the network is used outside the container and the network
>> namespaces are compiled in, there is no performance degradation.
>>
>> Eric's patchset allows moving network devices between namespaces, which
>> is clearly a good feature and one that is missing from Dmitry's patchset.
>> This feature lets us verify that the network namespace code does not add
>> overhead when the physical network device is used directly inside the
>> container.
>>
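For reference, the move operation itself can be illustrated with the
rtnetlink interface that later went into mainline (RTM_NEWLINK carrying an
IFLA_NET_NS_PID attribute). The sketch below is only an illustration under
that assumption; the patchset under test may expose a different user
interface, and handling of the kernel's netlink ACK is omitted.

/* Sketch: move an existing network device into the network namespace
 * of the process identified by target_pid.  Uses RTM_NEWLINK with the
 * IFLA_NET_NS_PID attribute; reading back the kernel's ACK is omitted.
 */
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>

int move_dev_to_netns(const char *ifname, unsigned int target_pid)
{
	struct {
		struct nlmsghdr  nh;
		struct ifinfomsg ifi;
		char             attrbuf[RTA_SPACE(sizeof(unsigned int))];
	} req;
	struct sockaddr_nl kernel = { .nl_family = AF_NETLINK };
	struct rtattr *rta;
	int fd, ret;

	fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
	if (fd < 0)
		return -1;

	memset(&req, 0, sizeof(req));
	req.nh.nlmsg_len   = NLMSG_LENGTH(sizeof(req.ifi));
	req.nh.nlmsg_type  = RTM_NEWLINK;
	req.nh.nlmsg_flags = NLM_F_REQUEST | NLM_F_ACK;
	req.ifi.ifi_family = AF_UNSPEC;
	req.ifi.ifi_index  = if_nametoindex(ifname);	/* device to move */

	/* the target namespace is the one of process target_pid */
	rta = (struct rtattr *)((char *)&req + NLMSG_ALIGN(req.nh.nlmsg_len));
	rta->rta_type = IFLA_NET_NS_PID;
	rta->rta_len  = RTA_LENGTH(sizeof(target_pid));
	memcpy(RTA_DATA(rta), &target_pid, sizeof(target_pid));
	req.nh.nlmsg_len = NLMSG_ALIGN(req.nh.nlmsg_len) + rta->rta_len;

	ret = sendto(fd, &req, req.nh.nlmsg_len, 0,
		     (struct sockaddr *)&kernel, sizeof(kernel));
	close(fd);
	return ret < 0 ? -1 : 0;
}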
>> The loss of performance is very noticeable inside the container and
>> seems to be directly related to the use of the pair device and the
>> specific network configuration the container needs. When packets are
>> sent by the container, the MAC address belongs to the pair device but
>> the IP address is not owned by the host. That forces the host to act
>> as a router and the packets to be forwarded, which adds a lot of
>> overhead.
>>
>> A hack was made in the ip_forward function to avoid a useless
>> skb_cow when using the pair/tunnel device, and the overhead
>> is reduced by half.
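To show where such a change would sit, here is a fragment of ip_forward()
from a 2.6.20-era tree. is_pair_device() is a hypothetical placeholder for
however the pair/tunnel device is actually recognised, so this is an
illustration of the idea rather than the actual hack.

/* net/ipv4/ip_forward.c, inside ip_forward() -- illustrative fragment.
 * is_pair_device() is a hypothetical helper standing in for whatever
 * test the real hack uses to recognise the pair/tunnel device.
 */
	struct rtable *rt = (struct rtable *)skb->dst;
	struct iphdr *iph;

	/* Mainline unconditionally copies before mangling the packet;
	 * the hack skips that copy when the packet goes out over the
	 * pair device. */
	if (!is_pair_device(rt->u.dst.dev) &&
	    skb_cow(skb, LL_RESERVED_SPACE(rt->u.dst.dev) +
			 rt->u.dst.header_len))
		goto drop;

	iph = skb->nh.iph;

	/* Decrease ttl after skb cow done */
	ip_decrease_ttl(iph);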
>
> would it be possible to do some tests regarding scalability?
>
> i.e. I would be interested in how the following would look:
>
> 10 connections on a single host (in parallel, overall performance)
> 10 connections from the same net space
> 10 connections from 10 different net spaces
> (i.e. one connection from each space)
>
> we can assume that L3 isolation will give similar results to
> the first case, but if needed, we can provide a patch to
> test this too ...
>
Ok. Assuming Eric's and Dmitry's patchsets are very similar, I will
focus on Eric's patchset because it is more mature and easier to set
up. I will have a look at the bridge optimization before doing that.
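For the first scenario (10 connections on a single host, in parallel), I
would drive it with something like the fragment below. The use of netperf
and its -H/-l/-t options is my assumption for the load generator, and the
per-namespace cases would additionally need each child started inside its
own network namespace.

/* Rough driver for "10 connections on a single host, in parallel":
 * fork NR_STREAMS children, each running one netperf TCP_STREAM test.
 * Aggregate throughput is read from the individual netperf reports.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

#define NR_STREAMS 10

int main(int argc, char *argv[])
{
	const char *server = argc > 1 ? argv[1] : "127.0.0.1"; /* placeholder */
	int i;

	for (i = 0; i < NR_STREAMS; i++) {
		pid_t pid = fork();

		if (pid == 0) {
			/* one 30 second TCP_STREAM test per child */
			execlp("netperf", "netperf", "-H", server,
			       "-l", "30", "-t", "TCP_STREAM", (char *)NULL);
			perror("execlp netperf");
			_exit(1);
		} else if (pid < 0) {
			perror("fork");
			return 1;
		}
	}

	for (i = 0; i < NR_STREAMS; i++)
		wait(NULL);

	return 0;
}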
>
> PS: great work! tx!
>
Thanks.
Thread overview: 13+ messages
2007-03-27 22:16 L2 network namespace benchmarking Daniel Lezcano
2007-03-27 23:08 ` Herbert Poetzl
2007-03-28 7:07 ` Daniel Lezcano [this message]
2007-03-28 2:04 ` Eric W. Biederman
2007-03-28 7:49 ` Kirill Korotaev
2007-03-28 12:06 ` Eric W. Biederman
2007-03-28 7:55 ` Daniel Lezcano
2007-03-28 11:52 ` Eric W. Biederman
2007-03-28 18:08 ` Rick Jones
2007-03-28 19:47 ` Daniel Lezcano
2007-03-28 20:12 ` Rick Jones
2007-03-29 7:37 ` Benjamin Thery
2007-03-29 13:01 ` Eric W. Biederman