From: Benoit Lourdelet <blourdel@juniper.net>
To: "Eric W. Biederman" <ebiederm@xmission.com>,
Stephen Hemminger <stephen@networkplumber.org>
Cc: Serge Hallyn <serge.hallyn@ubuntu.com>,
"netdev@vger.kernel.org" <netdev@vger.kernel.org>
Subject: Re: [RFC][PATCH] iproute: Faster ip link add, set and delete
Date: Sat, 30 Mar 2013 10:09:51 +0000
Message-ID: <CD7BAAE6.79D5%blourdel@juniper.net>
In-Reply-To: <87zjxn84ks.fsf@xmission.com>
Hello,

Here are my tests of the latest patches on 3 different platforms, all
running 3.8.5. Times are in seconds:
8x 3.7GHz virtual cores

   # veth   create   delete
     1000       14       18
     2000       39       56
     5000      256      161
    10000     1200      399

8x 3.2GHz virtual cores

   # veth   create   delete
     1000       19       40
     2000      118       66
     5000      305      251

32x 2GHz virtual cores, 2 sockets

   # veth   create   delete
     1000       35       86
     2000      120       90
     5000      724      245

Compared to the initial iproute2 performance on this 32 virtual core system:

     5000     1143     1185
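The numbers above come from a simple create-then-delete loop. A minimal sketch of such a loop follows; the actual test3.script was not posted to the thread, so the naming scheme and structure here are assumptions. DRY_RUN=1 (the default here) prints the ip commands instead of executing them, since creating interfaces requires root:

```shell
#!/bin/sh
# Sketch of a veth create/delete timing loop (assumed; the original
# test3.script was not posted). Set DRY_RUN=0 and run as root to
# actually create the interfaces.
N=${N:-3}
DRY_RUN=${DRY_RUN:-1}

run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

create() {
    i=0
    while [ "$i" -lt "$N" ]; do
        run ip link add "veth$i" type veth peer name "veth${i}p"
        i=$((i + 1))
    done
}

delete() {
    i=0
    while [ "$i" -lt "$N" ]; do
        run ip link delete "veth$i"   # deleting one end removes the pair
        i=$((i + 1))
    done
}

# time the two phases separately, as in the tables above
t0=$(date +%s); create; t1=$(date +%s)
echo "create: $((t1 - t0))s"
t0=$(date +%s); delete; t1=$(date +%s)
echo "delete: $((t1 - t0))s"
```

With N=5000 and DRY_RUN=0 this corresponds to the create/delete columns in the tables.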
"perf record" for creation of 5000 veth on the 32 core system :
# captured on: Fri Mar 29 14:03:35 2013
# hostname : ieng-serv06
# os release : 3.8.5
# perf version : 3.8.5
# arch : x86_64
# nrcpus online : 32
# nrcpus avail : 32
# cpudesc : Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz
# cpuid : GenuineIntel,6,45,7
# total memory : 264124548 kB
# cmdline : /usr/src/linux-3.8.5/tools/perf/perf record -a ./test3.script
# event : name = cycles, type = 0, config = 0x0, config1 = 0x0, config2 =
0x0, excl_usr = 0, excl_kern = 0, excl_host = 0, excl_guest = 1,
precise_ip = 0, id = { 36, 37, 38, 39, 40, 41, 42,
# HEADER_CPU_TOPOLOGY info available, use -I to display
# HEADER_NUMA_TOPOLOGY info available, use -I to display
# pmu mappings: cpu = 4, software = 1, uncore_pcu = 15, tracepoint = 2,
uncore_imc_0 = 17, uncore_imc_1 = 18, uncore_imc_2 = 19, uncore_imc_3 =
20, uncore_qpi_0 = 21, uncore_qpi_1 = 22, unco
# ========
#
# Samples: 9M of event 'cycles'
# Event count (approx.): 2894480238483
#
# Overhead  Command          Shared Object      Symbol
# ........  ...............  .................  ..............................
#
    15.17%  sudo             [kernel.kallsyms]  [k] snmp_fold_field
     5.94%  sudo             libc-2.15.so       [.] 0x00000000000802cd
     5.64%  sudo             [kernel.kallsyms]  [k] find_next_bit
     3.21%  init             libnih.so.1.0.0    [.] nih_list_add_after
     2.12%  swapper          [kernel.kallsyms]  [k] intel_idle
     1.94%  init             [kernel.kallsyms]  [k] page_fault
     1.93%  sed              libc-2.15.so       [.] 0x00000000000a1368
     1.93%  sudo             [kernel.kallsyms]  [k] rtnl_fill_ifinfo
     1.92%  sudo             [veth]             [k] veth_get_stats64
     1.78%  sudo             [kernel.kallsyms]  [k] memcpy
     1.53%  ifquery          libc-2.15.so       [.] 0x000000000007f52b
     1.24%  init             libc-2.15.so       [.] 0x000000000008918f
     1.05%  sudo             [kernel.kallsyms]  [k] inet6_fill_ifla6_attrs
     0.98%  init             [kernel.kallsyms]  [k] copy_pte_range
     0.88%  irqbalance       libc-2.15.so       [.] 0x00000000000802cd
     0.85%  sudo             [kernel.kallsyms]  [k] memset
     0.72%  sed              ld-2.15.so         [.] 0x000000000000a226
     0.68%  ifquery          ld-2.15.so         [.] 0x00000000000165a0
     0.64%  init             libnih.so.1.0.0    [.] nih_tree_next_post_full
     0.61%  bridge-network-  libc-2.15.so       [.] 0x0000000000131e2a
     0.59%  init             [kernel.kallsyms]  [k] do_wp_page
     0.59%  ifquery          [kernel.kallsyms]  [k] page_fault
     0.54%  sed              [kernel.kallsyms]  [k] page_fault
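The top entry, snmp_fold_field, is the kernel routine that sums a per-CPU counter array across every possible CPU for each stats field when a link's statistics are dumped, so with thousands of devices each dump touches every CPU's counters repeatedly. A rough back-of-envelope for the counter reads implied by one full dump, where the per-device field count is purely an assumption for illustration:

```shell
# Rough cost model for the stats folding seen in the profile above.
# NCPUS matches the 32-core box; NFIELDS is an assumed per-device
# SNMP field count (illustrative only).
NCPUS=32
NFIELDS=36
NDEVS=5000

# every fold reads one counter per possible CPU, per field, per device
reads_per_dump=$((NDEVS * NCPUS * NFIELDS))
echo "reads per full dump: $reads_per_dump"
```

Millions of cross-CPU counter reads per dump is consistent with snmp_fold_field dominating the profile as the device count grows.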
Regards
Benoit
On 29/03/2013 00:52, "Eric W. Biederman" <ebiederm@xmission.com> wrote:
>Stephen Hemminger <stephen@networkplumber.org> writes:
>
>> Try the following two patches. It adds a name hash list, and uses
>>Eric's idea
>> to avoid loading map on add/delete operations.
>
>On my microbenchmark of just creating 5000 veth pairs this takes 16s
>instead of 13s with my earlier hacks, but that is well down in the
>usable range.
>
>Deleting all of those network interfaces one by one takes me 60s.
>
>So on the microbenchmark side this looks like a good improvement and
>pretty usable.
>
>I expect Benoit's container startup workload will also reflect this, but
>it will be interesting to see the actual result.
>
>Eric
>
Thread overview: 32+ messages
2013-03-22 22:23 [RFC][PATCH] iproute: Faster ip link add, set and delete Eric W. Biederman
2013-03-22 22:27 ` Stephen Hemminger
2013-03-26 11:51 ` Benoit Lourdelet
2013-03-26 12:40 ` Eric W. Biederman
2013-03-26 14:17 ` Serge Hallyn
2013-03-26 14:33 ` Serge Hallyn
2013-03-27 13:37 ` Benoit Lourdelet
2013-03-27 15:11 ` Eric W. Biederman
2013-03-27 17:47 ` Stephen Hemminger
2013-03-28 0:46 ` Eric W. Biederman
2013-03-28 3:20 ` Serge Hallyn
2013-03-28 3:44 ` Eric W. Biederman
2013-03-28 4:28 ` Serge Hallyn
2013-03-28 5:00 ` Eric W. Biederman
2013-03-28 13:36 ` Serge Hallyn
2013-03-28 13:42 ` Benoit Lourdelet
2013-03-28 15:04 ` Serge Hallyn
2013-03-28 15:21 ` Benoit Lourdelet
2013-03-28 22:20 ` Stephen Hemminger
2013-03-28 23:52 ` Eric W. Biederman
2013-03-29 0:13 ` Eric Dumazet
2013-03-29 0:25 ` Eric W. Biederman
2013-03-29 0:43 ` Eric Dumazet
2013-03-29 1:06 ` Eric W. Biederman
2013-03-29 1:10 ` Eric Dumazet
2013-03-29 1:29 ` Eric W. Biederman
2013-03-29 1:38 ` Eric Dumazet
2013-03-30 10:09 ` Benoit Lourdelet [this message]
2013-03-30 14:44 ` Eric Dumazet
2013-03-30 16:07 ` Benoit Lourdelet
2013-03-28 20:27 ` Benoit Lourdelet
2013-03-26 15:31 ` Eric Dumazet