From: Emmanuel Jeanvoine <emmanuel.jeanvoine@inria.fr>
To: netdev@vger.kernel.org
Subject: Poor TCP bandwidth between network namespaces
Date: Mon, 4 Feb 2013 15:43:20 +0100 [thread overview]
Message-ID: <20130204144320.GG1353@tostaky> (raw)
Hello,
I'm trying to understand some performance issues when transferring
data over TCP between two Linux network namespaces (with veth
interfaces) on the same host.
Here is my approach:
I'm measuring the network performance with netpipe-tcp (Debian
wheezy package) in two situations:
- using the loopback interface (i.e. launching the netpipe client and
server on the same node)
- running the netpipe server inside one netns and the netpipe client
inside another one, both netns being on the same node.
I scripted this to ease reproducibility. It requires an 'ip'
utility that supports the 'netns' subcommand, and netpipe-tcp
installed. Furthermore, it uses the 192.168.64.0/24 network, but this
can be changed if required. Here is the script:
#!/bin/sh
#This script has to be launched as root
#
###Reference measurement
echo "### NPtcp execution without netns (localhost)"
NPtcp &
sleep 1
NPtcp -h localhost -o np-local
echo
###Netns measurement
#Prepare bridge and netns vnodes
brctl addbr br0
ip addr add 192.168.64.1/24 dev br0
ip link set br0 up
#First virtual node creation
ip link add name ext0 type veth peer name int0
ip link set ext0 up
brctl addif br0 ext0
ip netns add vnode0
ip link set dev int0 netns vnode0
ip netns exec vnode0 ip addr add 192.168.64.2/24 dev int0
ip netns exec vnode0 ip link set dev int0 up
#Second virtual node creation (reusing the name int0 is fine here:
#the first int0 now lives inside vnode0)
ip link add name ext1 type veth peer name int0
ip link set ext1 up
brctl addif br0 ext1
ip netns add vnode1
ip link set dev int0 netns vnode1
ip netns exec vnode1 ip addr add 192.168.64.3/24 dev int0
ip netns exec vnode1 ip link set dev int0 up
echo "### NPtcp execution inside netns"
ip netns exec vnode0 NPtcp &
sleep 1
ip netns exec vnode1 NPtcp -h 192.168.64.2 -o np-netns
# Clean everything up
ip link set br0 down
brctl delbr br0
ip netns delete vnode0
ip netns delete vnode1
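For completeness, here is a small read-only sanity check I can run
after the setup phase to rule out obvious configuration differences
(the interface names br0/ext0/ext1 come from the script above; absent
devices are simply skipped):

```shell
#!/bin/sh
# Read-only sanity check: print the MTU of each interface on the path.
# A mismatch here (or offloads disabled, see 'ethtool -k <dev>') is a
# common source of throughput differences. Skips absent devices.
for dev in lo br0 ext0 ext1; do
    [ -d "/sys/class/net/$dev" ] || continue
    printf '%s mtu=%s\n' "$dev" "$(cat "/sys/class/net/$dev/mtu")"
done
```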
I ran this experiment on 3.2 and 3.7 kernels; here are the results:
- on a 3.2 kernel:
http://www.loria.fr/~ejeanvoi/pub/netns-kernel-3.2.png
- on a 3.7 kernel:
http://www.loria.fr/~ejeanvoi/pub/netns-kernel-3.7.png
I'm wondering why the overhead is so high when performing TCP
transfers between two network namespaces. Do you have any idea what
causes this? And, if possible, how can the bandwidth between network
namespaces be increased (without modifying the MTU on the veths)?
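To be concrete about the kind of knob I mean, here is a sketch that
toggles segmentation offloads on the host-side veth endpoints
(interface names from the script above; ethtool is assumed to be
installed, absent devices are skipped, and whether this actually helps
is presumably kernel-dependent):

```shell
#!/bin/sh
# Sketch: enable segmentation offloads (GRO/GSO/TSO) on the host-side
# veth endpoints. This only demonstrates the mechanism; its effect on
# veth throughput likely varies by kernel version.
for dev in ext0 ext1; do
    if [ -d "/sys/class/net/$dev" ]; then
        ethtool -K "$dev" gro on gso on tso on
    else
        echo "skip: $dev not present"
    fi
done
```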
Thanks in advance,
Emmanuel Jeanvoine.
Thread overview: 5+ messages
2013-02-04 14:43 Emmanuel Jeanvoine [this message]
2013-02-04 22:52 ` Poor TCP bandwidth between network namespaces Hannes Frederic Sowa
2013-02-09 1:33 ` Eric Dumazet
2013-02-09 1:54 ` Rick Jones
2013-02-09 2:17 ` Eric Dumazet