From: Daniel Lezcano <daniel.lezcano@free.fr>
To: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Daniel Lezcano <dlezcano@fr.ibm.com>,
	Linux Containers <containers@lists.osdl.org>,
	netdev@vger.kernel.org, Dmitry Mishin <dim@openvz.org>
Subject: Re: L2 network namespace benchmarking
Date: Wed, 28 Mar 2007 09:55:46 +0200
Message-ID: <460A1F82.9090108@free.fr>
In-Reply-To: <m1ircmunre.fsf@ebiederm.dsl.xmission.com>

Eric W. Biederman wrote:
> Daniel Lezcano <dlezcano@fr.ibm.com> writes:
> 
>> 3. General observations
>> -----------------------
>>
>> The objective of having no performance degradation when the network
>> namespace is disabled in the kernel is met by both solutions.
>>
>> When the network is used outside the container and the network
>> namespaces are compiled in, there is no performance degradation.
>>
>> Eric's patchset allows network devices to be moved between namespaces,
>> which is clearly a good feature and is missing from Dmitry's patchset.
>> This feature lets us see that the network namespace code does not add
>> overhead when the physical network device is used directly inside the
>> container.
> 
> Assuming these results are not contradicted, this says that the extra
> dereference where we need it does not add measurable overhead
> in the Linux network stack.  Performance-wise this should be good
> enough to allow merging the code into the Linux kernel, as it does
> not measurably affect networking when we do not have multiple
> containers in use.

I have a few questions about merging code into the Linux kernel.

* How do you plan to do that?
* When do you expect the network namespace to be in mainline?
* Are Dave Miller and Alexey Kuznetsov aware of the network namespace?
* Have they seen your patchset, or do they even know it exists?
* Do you have any feedback from netdev about the network namespace?

> 
> Things are good enough that we can even consider not providing
> an option to compile the support out.
> 
>> The loss of performance is very noticeable inside the container and
>> seems to be directly related to the use of the pair device and the
>> specific network configuration needed for the container. When the
>> packets are sent by the container, the MAC address belongs to the pair
>> device but the IP address is not owned by the host. That forces the
>> host to act as a router and the packets to be forwarded, which adds a
>> lot of overhead.
> 
> Well it adds measurable overhead.
> 
>> A hack was made in the ip_forward function to avoid a useless
>> skb_cow when the pair device/tunnel device is used, and the overhead
>> is reduced by half.
> 
> To be fully satisfactory, how we get the packets to the namespace
> still appears to need work.
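
For reference, the hack boils down to avoiding the skb_cow() call in
ip_forward() when the output device is the pair/etun device. Below is a
rough sketch of one way to express that check; the IFF_ETUN flag is
invented purely for illustration (it is not a real kernel flag), and the
actual patch may identify the device differently.

#include <linux/skbuff.h>
#include <linux/netdevice.h>

#define IFF_ETUN	0x8000	/* invented for this sketch, not a real flag */

static inline int forward_needs_cow(struct sk_buff *skb,
				    struct net_device *out)
{
	/* A shared or cloned skb must still be copied before we modify it. */
	if (skb_shared(skb) || skb_cloned(skb))
		return 1;

	/* Invented flag: the pair/etun device needs no extra link-layer
	 * headroom, so the copy is useless for it. */
	if (out->priv_flags & IFF_ETUN)
		return 0;

	return 1;
}

/*
 * In ip_forward() the copy then becomes conditional, roughly:
 *
 *	if (forward_needs_cow(skb, rt->u.dst.dev) &&
 *	    skb_cow(skb, LL_RESERVED_SPACE(rt->u.dst.dev)))
 *		goto drop;
 */

The point is only to skip the copy when it is known to be useless for
that device; everything else in ip_forward() stays the same.
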
> 
> We have overhead in routing.  That may simply be the cost of
> performing routing, or there may be some optimization opportunities
> there.
> We have about the same overhead when performing bridging, which I
> actually find more surprising, as the bridging code should involve
> less packet handling.

Yep. I will try to figure out what is happening.

> Ideally we can optimize the bridge code, or something equivalent to
> it, so that we can take one look at the destination MAC address and
> know which network namespace we should be in, potentially moving this
> work to hardware when the hardware supports multiple queues.
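
If I understand the idea, it amounts to a table keyed on the destination
MAC address, consulted once per received frame to pick the namespace.
A rough sketch of what such a lookup could look like; the structures and
mac_to_ns() are invented for illustration and come from neither patchset,
and struct net stands for the per-namespace structure.

#include <linux/etherdevice.h>
#include <linux/jhash.h>

struct net;				/* per-namespace structure */

#define NS_HASH_BITS	8
#define NS_HASH_SIZE	(1 << NS_HASH_BITS)

struct ns_hash_entry {
	struct ns_hash_entry	*next;
	unsigned char		addr[ETH_ALEN];
	struct net		*ns;	/* namespace owning this MAC */
};

static struct ns_hash_entry *ns_hash[NS_HASH_SIZE];

/* One lookup per frame: destination MAC -> owning network namespace. */
static struct net *mac_to_ns(const unsigned char *daddr)
{
	unsigned int h = jhash(daddr, ETH_ALEN, 0) & (NS_HASH_SIZE - 1);
	struct ns_hash_entry *e;

	for (e = ns_hash[h]; e; e = e->next)
		if (!compare_ether_addr(e->addr, daddr))
			return e->ns;

	return NULL;	/* unknown MAC: fall back to the normal path */
}

With something like that, demultiplexing would cost one hash lookup per
frame instead of a full bridging or routing decision, and the same
MAC-to-namespace table is exactly what a multi-queue NIC could be
programmed with.
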
> 
> If we can get the overhead out of the routing code, that would be
> tremendous.  However I think it may be more realistic to get the
> overhead out of the Ethernet bridging code, where we know we don't need
> to modify the packet.

The routing was optimized for the loopback, no? Why can't we do the
same for the etun device?

