From: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
To: <davem@davemloft.net>, <kuznet@ms2.inr.ac.ru>,
	<jmorris@namei.org>, <yoshfuji@linux-ipv6.org>, <kaber@trash.net>
Cc: <jiri@resnulli.us>, <edumazet@google.com>,
	<hannes@stressinduktion.org>, <tom@herbertland.com>,
	<azhou@nicira.com>, <ebiederm@xmission.com>,
	<ipm@chirality.org.uk>, <nicolas.dichtel@6wind.com>,
	<serge.hallyn@canonical.com>, <joe@perches.com>,
	<netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
	<raghavendra.kt@linux.vnet.ibm.com>, <anton@au1.ibm.com>,
	<nacc@linux.vnet.ibm.com>, <srikar@linux.vnet.ibm.com>
Subject: [PATCH RFC V4 0/2] Optimize the snmp stat aggregation for large cpus
Date: Sun, 30 Aug 2015 11:29:40 +0530
Message-ID: <1440914382-23126-1-git-send-email-raghavendra.kt@linux.vnet.ibm.com>

While creating 1000 containers, perf shows a lot of time spent in
snmp_fold_field on a large-CPU system.

This patch series reduces that overhead by reordering the statistics gathering.

Please note that similar overhead was also reported while creating
veth pairs: https://lkml.org/lkml/2013/3/19/556

Changes in V4:
 - remove 'item' variable and use IPSTATS_MIB_MAX to avoid a sparse
   warning (Eric); also remove the 'item' parameter (Joe)
 - add missing memset of padding

Changes in V3:
 - use memset to initialize temp buffer in leaf function. (David)
 - use memcpy to copy the buffer data to stat instead of put_unaligned (Joe)
 - Move buffer definition to leaf function __snmp6_fill_stats64() (Eric)
Changes in V2:
 - Allocate the stat calculation buffer in stack. (Eric)
 
Setup:
160 cpu (20 core) baremetal powerpc system with 1TB memory

1000 docker containers were created by running
docker run -itd ubuntu:15.04 /bin/bash in a loop

Observation:
Docker container creation time increased linearly from around 1.6 sec to 7.5 sec
(at 1000 containers). perf data showed that, while creating veth interfaces,
the code path below was taking most of the time.

rtnl_fill_ifinfo
  -> inet6_fill_link_af
    -> inet6_fill_ifla6_attrs
      -> snmp_fold_field

Proposed idea:
Currently __snmp6_fill_stats64 calls snmp_fold_field, which walks
through the per-cpu data of one item at a time (iteratively, for
around 36 items). This patch instead aggregates the statistics by
walking through all the items of each cpu sequentially, which
reduces cache misses.

Performance of docker container creation improved by more than 2x
after the patch.

before the patch: 
================
#time docker run -itd  ubuntu:15.04  /bin/bash
3f45ba571a42e925c4ec4aaee0e48d7610a9ed82a4c931f83324d41822cf6617
real	0m6.836s
user	0m0.095s
sys	0m0.011s

perf record -a docker run -itd  ubuntu:15.04  /bin/bash
=======================================================
# Samples: 32K of event 'cycles'
# Event count (approx.): 24688700190
# Overhead  Command          Shared Object           Symbol                                                                                         
# ........  ...............  ......................  ........................
    50.73%  docker           [kernel.kallsyms]       [k] snmp_fold_field                                                                                                        
     9.07%  swapper          [kernel.kallsyms]       [k] snooze_loop                                                                                                            
     3.49%  docker           [kernel.kallsyms]       [k] veth_stats_one                                                                                                         
     2.85%  swapper          [kernel.kallsyms]       [k] _raw_spin_lock                                                                                                         
     1.37%  docker           docker                  [.] backtrace_qsort                                                                                                        
     1.31%  docker           docker                  [.] strings.FieldsFunc                                                                      

  cache-misses: 2.7%

after the patch:
================
#time docker run -itd  ubuntu:15.04  /bin/bash
9178273e9df399c8290b6c196e4aef9273be2876225f63b14a60cf97eacfafb5
real	0m3.249s
user	0m0.088s
sys	0m0.020s

perf record -a docker run -itd  ubuntu:15.04  /bin/bash
=======================================================
# Samples: 18K of event 'cycles'
# Event count (approx.): 13958958797
# Overhead  Command          Shared Object         Symbol                                                                                
# ........  ...............  ....................  ..........................
    10.57%  docker           docker                [.] scanblock                                                                         
     8.37%  swapper          [kernel.kallsyms]     [k] snooze_loop                                                                       
     6.91%  docker           [kernel.kallsyms]     [k] snmp_get_cpu_field                                                                
     6.67%  docker           [kernel.kallsyms]     [k] veth_stats_one                                                                    
     3.96%  docker           docker                [.] runtime_MSpan_Sweep                                                               
     2.47%  docker           docker                [.] strings.FieldsFunc                                                                
 
cache-misses: 1.41%

Please let me know if you have suggestions/comments.
Thanks to Eric, Joe and David for the comments.

Raghavendra K T (2):
  net: Introduce helper functions to get the per cpu data
  net: Optimize snmp stat aggregation by walking all the percpu data at
    once

 include/net/ip.h    | 10 ++++++++++
 net/ipv4/af_inet.c  | 41 +++++++++++++++++++++++++++--------------
 net/ipv6/addrconf.c | 26 ++++++++++++++++----------
 3 files changed, 53 insertions(+), 24 deletions(-)

-- 
1.7.11.7

Thread overview: 5+ messages
2015-08-30  5:59 Raghavendra K T [this message]
2015-08-30  5:59 ` [PATCH RFC V4 1/2] net: Introduce helper functions to get the per cpu data Raghavendra K T
2015-08-30  5:59 ` [PATCH RFC V4 2/2] net: Optimize snmp stat aggregation by walking all the percpu data at once Raghavendra K T
2015-08-30 15:52   ` Eric Dumazet
2015-08-31  4:49 ` [PATCH RFC V4 0/2] Optimize the snmp stat aggregation for large cpus David Miller
