From: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
To: Eric Dumazet <eric.dumazet@gmail.com>
Cc: davem@davemloft.net, kuznet@ms2.inr.ac.ru, jmorris@namei.org,
yoshfuji@linux-ipv6.org, kaber@trash.net, jiri@resnulli.us,
edumazet@google.com, hannes@stressinduktion.org,
tom@herbertland.com, azhou@nicira.com, ebiederm@xmission.com,
ipm@chirality.org.uk, nicolas.dichtel@6wind.com,
netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
anton@au1.ibm.com, nacc@linux.vnet.ibm.com,
srikar@linux.vnet.ibm.com
Subject: Re: [PATCH RFC 0/2] Optimize the snmp stat aggregation for large cpus
Date: Tue, 25 Aug 2015 21:25:41 +0530
Message-ID: <55DC8FFD.2020306@linux.vnet.ibm.com>
In-Reply-To: <1440513231.8932.14.camel@edumazet-glaptop2.roam.corp.google.com>

On 08/25/2015 08:03 PM, Eric Dumazet wrote:
> On Tue, 2015-08-25 at 13:24 +0530, Raghavendra K T wrote:
>> While creating 1000 containers, perf shows a lot of time spent in
>> snmp_fold_field() on a large-cpu system.
>>
>> The patches improve this by reordering the statistics gathering.
>>
>> Please note that similar overhead was also reported while creating
>> veth pairs: https://lkml.org/lkml/2013/3/19/556
>>
>> Setup:
>> 160 cpu (20 core) baremetal powerpc system with 1TB memory
>
> I wonder if these kinds of results would demonstrate cache coloring
> problems on this host. It looks like all the per-cpu data are colliding
> on the same cache lines.
>
It could be. My testing on a 128-cpu system with less memory did not
incur a huge time penalty for 1000 containers, but snmp_fold_field()
had the problem in general: in the same experiment, the snmp_fold
overhead was around 15%, and it dropped to 5% after the patch.
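
To make the reordering concrete, here is a minimal userspace sketch of
the two loop orders. This is not the actual kernel patch; NR_CPUS,
NR_FIELDS and per_cpu_stats[][] are illustrative stand-ins for the
kernel's per-cpu SNMP mibs:

#include <stdint.h>

#define NR_CPUS		160	/* cpus on the test system above */
#define NR_FIELDS	64	/* illustrative number of SNMP counters */

static uint64_t per_cpu_stats[NR_CPUS][NR_FIELDS];

/*
 * Old order, like calling snmp_fold_field() once per field: every
 * field walks all cpus, so the same remote per-cpu cache lines can
 * end up being re-fetched for every field.
 */
static void fold_per_field(uint64_t out[NR_FIELDS])
{
	for (int f = 0; f < NR_FIELDS; f++) {
		out[f] = 0;
		for (int c = 0; c < NR_CPUS; c++)
			out[f] += per_cpu_stats[c][f];
	}
}

/*
 * New order: walk each cpu's data once and accumulate all fields
 * into a local buffer, so each remote cache line is pulled roughly
 * once in total.
 */
static void fold_per_cpu(uint64_t out[NR_FIELDS])
{
	for (int f = 0; f < NR_FIELDS; f++)
		out[f] = 0;
	for (int c = 0; c < NR_CPUS; c++)
		for (int f = 0; f < NR_FIELDS; f++)
			out[f] += per_cpu_stats[c][f];
}

On a 160-cpu machine the difference amounts to revisiting each cpu's
counters once per field versus once in total, which is consistent with
the 15% -> 5% drop reported above.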
Thread overview (14+ messages):
2015-08-25 7:54 [PATCH RFC 0/2] Optimize the snmp stat aggregation for large cpus Raghavendra K T
2015-08-25 7:54 ` [PATCH RFC 1/2] net: Introduce helper functions to get the per cpu data Raghavendra K T
2015-08-25 7:54 ` [PATCH RFC 2/2] net: Optimize snmp stat aggregation by walking all the percpu data at once Raghavendra K T
2015-08-25 14:28 ` Eric Dumazet
2015-08-25 15:47 ` Raghavendra K T
2015-08-25 16:00 ` Eric Dumazet
2015-08-25 16:06 ` Raghavendra K T
2015-08-26 11:04 ` Raghavendra K T
2015-08-25 14:33 ` [PATCH RFC 0/2] Optimize the snmp stat aggregation for large cpus Eric Dumazet
2015-08-25 15:55 ` Raghavendra K T [this message]
2015-08-25 23:07 ` David Miller
2015-08-26 10:25 ` Raghavendra K T
2015-08-26 14:09 ` Eric Dumazet
2015-08-26 14:30 ` Raghavendra K T