* Poor TCP bandwidth between network namespaces
@ 2013-02-04 14:43 Emmanuel Jeanvoine
  2013-02-04 22:52 ` Hannes Frederic Sowa
  0 siblings, 1 reply; 5+ messages in thread
From: Emmanuel Jeanvoine @ 2013-02-04 14:43 UTC (permalink / raw)
  To: netdev

Hello,

I'm trying to understand some performance issues when transferring 
data over TCP between two Linux network namespaces (with veth 
interfaces) on the same host.

Here is my approach:
I'm measuring the network performance with netpipe-tcp (Debian 
wheezy package) in two situations:
- using the loopback interface (i.e. launching the netpipe client and 
server on the same node)
- using a netpipe server inside one netns and a netpipe client inside 
another one, with both netns on the same node.

This has been scripted to make it easy to reproduce. It requires an 
'ip' utility that supports the 'netns' argument, as well as 
netpipe-tcp and brctl (bridge-utils) to be installed. Furthermore, it 
uses the 192.168.64.0/24 network, but this can be changed if 
required. Here is the script:
#!/bin/sh
#This script has to be launched as root
#
###Reference measurement
echo "### Iperf execution without netns (localhost)"
NPtcp &
NPtcp -h localhost -o np-local
echo
###Netns measurement
#Prepare bridge and netns vnodes
brctl addbr br0
ip addr add 192.168.64.1/24 dev br0
ip link set br0 up
#First virtual node creation
ip link add name ext0 type veth peer name int0
ip link set ext0 up
brctl addif br0 ext0
ip netns add vnode0
ip link set dev int0 netns vnode0
ip netns exec vnode0 ip addr add 192.168.64.2/24 dev int0 
ip netns exec vnode0 ip link set dev int0 up
#Second virtual node creation
ip link add name ext1 type veth peer name int0
ip link set ext1 up
brctl addif br0 ext1
ip netns add vnode1
ip link set dev int0 netns vnode1
ip netns exec vnode1 ip addr add 192.168.64.3/24 dev int0
ip netns exec vnode1 ip link set dev int0 up
echo "### Iperf execution inside netns"
ip netns exec vnode0 NPtcp &
sleep 1
ip netns exec vnode1 NPtcp -h 192.168.64.2 -o np-netns
# Cleaning everything
ifconfig br0 down
brctl delbr br0
ip netns delete vnode0
ip netns delete vnode1
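
As a quick sanity check (not part of the script above, just an 
illustration reusing its names and addresses), connectivity between 
the two namespaces can be verified before the measurement:

#Optional sanity check, run before the cleanup section of the script
ip netns exec vnode1 ping -c 3 192.168.64.2
#Show the state and MTU of the veth endpoints inside each namespace
ip netns exec vnode0 ip link show dev int0
ip netns exec vnode1 ip link show dev int0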


This experiment has been performed with 3.2 and 3.7 kernels, and here 
are the results:
- on a 3.2 kernel: 
http://www.loria.fr/~ejeanvoi/pub/netns-kernel-3.2.png
- on a 3.7 kernel: 
http://www.loria.fr/~ejeanvoi/pub/netns-kernel-3.7.png

I'm wondering why the overhead is so high when performing TCP 
transfers between two network namespaces. Do you have any idea about 
this issue? And possibly, how to increase the bandwidth (without 
modifying the MTU on the veths) between network namespaces?

Thanks in advance,
Emmanuel Jeanvoine.


* Re: Poor TCP bandwidth between network namespaces
  2013-02-04 14:43 Poor TCP bandwidth between network namespaces Emmanuel Jeanvoine
@ 2013-02-04 22:52 ` Hannes Frederic Sowa
  2013-02-09  1:33   ` Eric Dumazet
  0 siblings, 1 reply; 5+ messages in thread
From: Hannes Frederic Sowa @ 2013-02-04 22:52 UTC (permalink / raw)
  To: Emmanuel Jeanvoine; +Cc: netdev

On Mon, Feb 04, 2013 at 03:43:20PM +0100, Emmanuel Jeanvoine wrote:
> I'm wondering why the overhead is so high when performing TCP 
> transfers between two network namespaces. Do you have any idea about 
> this issue? And possibly, how to increase the bandwidth (without 
> modifying the MTU on the veths) between network namespaces?

You could try Eric's patch (already in net-next) and have a look at the rest
of the discussion:

http://article.gmane.org/gmane.linux.network/253589


* Re: Poor TCP bandwidth between network namespaces
  2013-02-04 22:52 ` Hannes Frederic Sowa
@ 2013-02-09  1:33   ` Eric Dumazet
  2013-02-09  1:54     ` Rick Jones
  2013-02-09  2:17     ` Eric Dumazet
  0 siblings, 2 replies; 5+ messages in thread
From: Eric Dumazet @ 2013-02-09  1:33 UTC (permalink / raw)
  To: Hannes Frederic Sowa; +Cc: Emmanuel Jeanvoine, netdev

On Mon, 2013-02-04 at 23:52 +0100, Hannes Frederic Sowa wrote:
> On Mon, Feb 04, 2013 at 03:43:20PM +0100, Emmanuel Jeanvoine wrote:
> > I'm wondering why the overhead is so high when performing TCP 
> > transfers between two network namespaces. Do you have any idea about 
> > this issue? And possibly, how to increase the bandwidth (without 
> > modifying the MTU on the veths) between network namespaces?
> 
> You could try Eric's patch (already in net-next) and have a look at the rest
> of the discussion:
> 
> http://article.gmane.org/gmane.linux.network/253589

Another thing to consider is the default MTU value:

65536 for lo, and 1500 for veth

That alone easily explains veth reaching only half the throughput.

Another thing is the tx-nocache-copy setting, which can account for a
few extra percent.
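
For reference, both settings can be inspected and adjusted with
standard tools. Here is a sketch using the interface names from the
script earlier in the thread (the maximum accepted veth MTU and the
availability of the tx-nocache-copy flag may vary by kernel and
driver):

#Inspect the current MTU of the veth endpoints inside each namespace
ip netns exec vnode0 ip link show dev int0
ip netns exec vnode1 ip link show dev int0
#If relaxing the "no MTU change" constraint is acceptable, raise the
#MTU on both ends of each pair (9000 is just an example value)
ip netns exec vnode0 ip link set dev int0 mtu 9000
ip link set dev ext0 mtu 9000
#Check and, if desired, disable tx-nocache-copy on the sending side
ip netns exec vnode1 ethtool -k int0 | grep tx-nocache-copy
ip netns exec vnode1 ethtool -K int0 tx-nocache-copy off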


* Re: Poor TCP bandwidth between network namespaces
  2013-02-09  1:33   ` Eric Dumazet
@ 2013-02-09  1:54     ` Rick Jones
  2013-02-09  2:17     ` Eric Dumazet
  1 sibling, 0 replies; 5+ messages in thread
From: Rick Jones @ 2013-02-09  1:54 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: Hannes Frederic Sowa, Emmanuel Jeanvoine, netdev

On 02/08/2013 05:33 PM, Eric Dumazet wrote:
> On Mon, 2013-02-04 at 23:52 +0100, Hannes Frederic Sowa wrote:
>> On Mon, Feb 04, 2013 at 03:43:20PM +0100, Emmanuel Jeanvoine wrote:
>>> I'm wondering why the overhead is so high when performing TCP
>>> transfers between two network namespaces. Do you have any idea about
>>> this issue? And possibly, how to increase the bandwidth (without
>>> modifying the MTU on the veths) between network namespaces?
>>
>> You could try Eric's patch (already in net-next) and have a look at the rest
>> of the discussion:
>>
>> http://article.gmane.org/gmane.linux.network/253589
>
> Another thing to consider is the default MTU value:
>
> 65536 for lo, and 1500 for veth
>
> That alone easily explains veth reaching only half the throughput.
>
> Another thing is the tx-nocache-copy setting, which can account for a
> few extra percent.

Whenever I want to avoid matters of MTU, I go with a test that never 
sends anything larger than the smaller of the MTUs involved.  One 
such example might be (aggregate) netperf TCP_RR tests.  Matters of 
path length have a much more difficult time "hiding" from a TCP_RR 
(or UDP_RR) test than from a bulk transfer test.
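
A single-stream version of such a test between the two namespaces 
from the original script might look like this (a sketch, assuming 
netperf is installed, with the server address taken from that 
script):

#Start netserver in one namespace, then run a 1-byte request/response
#TCP_RR test from the other namespace
ip netns exec vnode0 netserver
ip netns exec vnode1 netperf -H 192.168.64.2 -t TCP_RR -- -r 1,1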

happy benchmarking,

rick jones


* Re: Poor TCP bandwidth between network namespaces
  2013-02-09  1:33   ` Eric Dumazet
  2013-02-09  1:54     ` Rick Jones
@ 2013-02-09  2:17     ` Eric Dumazet
  1 sibling, 0 replies; 5+ messages in thread
From: Eric Dumazet @ 2013-02-09  2:17 UTC (permalink / raw)
  To: Hannes Frederic Sowa; +Cc: Emmanuel Jeanvoine, netdev

On Fri, 2013-02-08 at 17:33 -0800, Eric Dumazet wrote:

> Another thing to consider is the default MTU value:
> 
> 65536 for lo, and 1500 for veth
> 
> That alone easily explains veth reaching only half the throughput.
> 
> Another thing is the tx-nocache-copy setting, which can account for a
> few extra percent.

By the way, I get crashes when doing:

# ip link add name veth1 type veth peer name veth0
# rmmod veth

I'll submit a patch.

