From: Rick Jones <rick.jones2@hpe.com>
To: Phil Sutter <phil@nwl.cc>,
Nicolas Dichtel <nicolas.dichtel@6wind.com>,
"Eric W. Biederman" <ebiederm@xmission.com>,
Stephen Hemminger <shemming@brocade.com>,
netdev@vger.kernel.org
Subject: Re: [iproute PATCH 0/2] Netns performance improvements
Date: Thu, 7 Jul 2016 09:16:20 -0700
Message-ID: <577E8054.6040603@hpe.com>
In-Reply-To: <20160707154809.GN620@orbyte.nwl.cc>
On 07/07/2016 08:48 AM, Phil Sutter wrote:
> On Thu, Jul 07, 2016 at 02:59:48PM +0200, Nicolas Dichtel wrote:
>> On 07/07/2016 13:17, Phil Sutter wrote:
>> [snip]
>>> The issue came up during OpenStack Neutron testing, see this ticket for
>>> reference:
>>>
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1310795
>> Access to this ticket is not public :(
>
> *Sigh* OK, here are a few quotes:
>
> "OpenStack Neutron controller nodes, when undergoing testing, are
> locking up specifically during creation and mounting of namespaces.
> They appear to be blocking behind vfsmount_lock and on contention for
> the namespace_sem"
>
> "During the scale testing, we have 300 routers, 600 dhcp namespaces
> spread across four neutron network nodes. When then start as one set of
> standard Openstack Rally benchmark test cycle against neutron. An
> example scenario is creating 10x networks, list them, delete them and
> repeat 10x times. The second set performs an L3 benchmark test between
> two instances."
>
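
For anyone reading along without the ticket: the churn described above
maps almost directly onto mount-table work, because "ip netns add" pins
each new namespace by bind-mounting /proc/self/ns/net under
/var/run/netns, and "ip netns delete" tears that mount down again. The
following is a minimal C sketch of that add/delete sequence, not the
iproute2 source; the path, loop count, and error handling are simplified
for illustration, and it needs root plus an existing /var/run/netns
directory.

#define _GNU_SOURCE
#include <sched.h>      /* unshare(), CLONE_NEWNET */
#include <sys/mount.h>  /* mount(), umount2(), MS_BIND, MNT_DETACH */
#include <fcntl.h>      /* open(), O_CREAT */
#include <stdio.h>      /* snprintf() */
#include <unistd.h>     /* close(), unlink() */

/* Roughly what "ip netns add NAME" does. */
static int netns_add(const char *path)
{
        int fd = open(path, O_RDONLY | O_CREAT | O_EXCL, 0);
        if (fd < 0)
                return -1;
        close(fd);

        if (unshare(CLONE_NEWNET) < 0)      /* new network namespace */
                return -1;

        /* Pin it: bind-mount the namespace file onto the mount point.
         * Every add/delete pair touches the global mount machinery,
         * which is where the locking described above shows up. */
        return mount("/proc/self/ns/net", path, "none", MS_BIND, NULL);
}

/* Roughly what "ip netns delete NAME" does. */
static void netns_del(const char *path)
{
        umount2(path, MNT_DETACH);
        unlink(path);
}

int main(void)
{
        char path[64];

        /* Crude stand-in for one Rally-style create/delete cycle. */
        for (int i = 0; i < 100; i++) {
                snprintf(path, sizeof(path), "/var/run/netns/bench-%d", i);
                if (netns_add(path) == 0)
                        netns_del(path);
        }
        return 0;
}

Multiply a loop like that by hundreds of router and dhcp namespaces per
node, run concurrently by several agents, and the serialization the
ticket mentions becomes easy to believe.
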
Those 300 routers will each have at least one namespace, along with the
dhcp namespaces. Depending on the nature of the routers (Distributed
versus Centralized Virtual Routers - DVR vs CVR) and on whether the
routers are supposed to be "HA", there can be more than one namespace
for a given router.

300 routers is far from the upper limit/goal. Back in HP Public Cloud,
we were running as many as 700 routers per network node (*), and more
than four network nodes (back then it was just the one namespace per
router and per network). Mileage will of course vary based on the
"oomph" of one's network node(s).

happy benchmarking,

rick jones

* Didn't want to go much higher than that because each router had a port
on a common linux bridge, and getting past 1024 ports would have made for
an unpleasant day.

Thread overview: 18+ messages
2016-07-05 14:42 [iproute PATCH 0/2] Netns performance improvements Phil Sutter
2016-07-05 14:42 ` [iproute PATCH 1/2] ipnetns: Move NETNS_RUN_DIR into its own propagation group Phil Sutter
2016-07-05 14:42 ` [iproute PATCH 2/2] ipnetns: Make netns mount points private Phil Sutter
2016-07-05 14:44 ` [iproute PATCH 0/2] Netns performance improvements Eric W. Biederman
2016-07-05 20:51 ` Phil Sutter
2016-07-07 4:58 ` Eric W. Biederman
2016-07-07 11:17 ` Phil Sutter
2016-07-07 12:59 ` Nicolas Dichtel
2016-07-07 15:48 ` Phil Sutter
2016-07-07 16:16 ` Rick Jones [this message]
2016-07-07 16:34 ` Eric W. Biederman
2016-07-07 17:28 ` Rick Jones
2016-07-08 8:12 ` Eric W. Biederman
2016-07-08 14:31 ` Brian Haley
2016-07-08 8:01 ` Nicolas Dichtel
2016-07-08 17:18 ` Rick Jones
2016-07-11 12:51 ` Nicolas Dichtel
2016-07-05 14:49 ` Phil Sutter