linux-mm.kvack.org archive mirror
From: Alexander Duyck <alexander.duyck@gmail.com>
To: Jesper Dangaard Brouer <brouer@redhat.com>,
	netdev@vger.kernel.org, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, aravinda@linux.vnet.ibm.com,
	Christoph Lameter <cl@linux.com>,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	iamjoonsoo.kim@lge.com
Subject: Re: [RFC PATCH 0/3] Network stack, first user of SLAB/kmem_cache bulk free API.
Date: Fri, 4 Sep 2015 11:09:21 -0700
Message-ID: <55E9DE51.7090109@gmail.com>
In-Reply-To: <20150904165944.4312.32435.stgit@devil>

On 09/04/2015 10:00 AM, Jesper Dangaard Brouer wrote:
> During TX DMA completion cleanup there exists an opportunity in the NIC
> drivers to perform bulk free, without introducing additional latency.
>
> For an IPv4 forwarding workload the network stack is hitting the
> slowpath of the kmem_cache "slub" allocator.  This slowpath can be
> mitigated by bulk free via the detached freelists patchset.
>
> Depends on patchset:
>   http://thread.gmane.org/gmane.linux.kernel.mm/137469
>
> Kernel based on MMOTM tag 2015-08-24-16-12 from git repo:
>   git://git.kernel.org/pub/scm/linux/kernel/git/mhocko/mm.git
>   Also contains Christoph's patch "slub: Avoid irqoff/on in bulk allocation"
>
>
> Benchmarking: Single CPU IPv4 forwarding UDP (generator pktgen):
>   * Before: 2043575 pps
>   * After : 2090522 pps
>   * Improvement: +46947 pps and -10.99 ns (per packet)
>
> In the before case, perf report shows slub free hits the slowpath:
>   1.98%  ksoftirqd/6  [kernel.vmlinux]  [k] __slab_free.isra.72
>   1.29%  ksoftirqd/6  [kernel.vmlinux]  [k] cmpxchg_double_slab.isra.71
>   0.95%  ksoftirqd/6  [kernel.vmlinux]  [k] kmem_cache_free
>   0.95%  ksoftirqd/6  [kernel.vmlinux]  [k] kmem_cache_alloc
>   0.20%  ksoftirqd/6  [kernel.vmlinux]  [k] __cmpxchg_double_slab.isra.60
>   0.17%  ksoftirqd/6  [kernel.vmlinux]  [k] ___slab_alloc.isra.68
>   0.09%  ksoftirqd/6  [kernel.vmlinux]  [k] __slab_alloc.isra.69
>
> After, the slowpath calls are almost gone:
>   0.22%  ksoftirqd/6  [kernel.vmlinux]  [k] __cmpxchg_double_slab.isra.60
>   0.18%  ksoftirqd/6  [kernel.vmlinux]  [k] ___slab_alloc.isra.68
>   0.14%  ksoftirqd/6  [kernel.vmlinux]  [k] __slab_free.isra.72
>   0.14%  ksoftirqd/6  [kernel.vmlinux]  [k] cmpxchg_double_slab.isra.71
>   0.08%  ksoftirqd/6  [kernel.vmlinux]  [k] __slab_alloc.isra.69
>
>
> Extra info: tuning the SLUB per-CPU structures gives further improvements:
>   * slub-tuned: 2124217 pps
>   * increase over patched: +33695 pps and  -7.59 ns
>   * increase over before:  +80642 pps and -18.58 ns
>
> Tuning done:
>   echo 256 > /sys/kernel/slab/skbuff_head_cache/cpu_partial
>   echo 9   > /sys/kernel/slab/skbuff_head_cache/min_partial
>
> Without SLUB tuning, the same performance comes from the kernel cmdline "slab_nomerge":
>   * slab_nomerge: 2121824 pps
>
> Test notes:
>   * Note the very fast CPU: Intel Core i7-4790K @ 4.00GHz
>   * gcc version 4.8.3 20140911 (Red Hat 4.8.3-9) (GCC)
>   * kernel 4.1.0-mmotm-2015-08-24-16-12+ #271 SMP
>   * Generator pktgen UDP single flow (pktgen_sample03_burst_single_flow.sh)
>  * Tuned for forwarding:
>    - unloaded netfilter modules
>    - Sysctl settings:
>      - net/ipv4/conf/default/rp_filter = 0
>      - net/ipv4/conf/all/rp_filter = 0
>      - net/ipv4/ip_early_demux = 0  (forwarding performance is affected by early demux)
>      - net.ipv4.ip_forward = 1
>    - Disabled GRO (and TSO/GSO) on NICs:
>      - ethtool -K ixgbe3 gro off tso off gso off
>
> ---
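
To make the quoted proposal a bit more concrete: the dependency patchset
provides kmem_cache_free_bulk(), and a driver's TX completion path could
batch skb heads into a small array and hand them off in a single call.
The sketch below is only an illustration, not the RFC code;
SKB_FREE_BATCH and next_completed_tx_skb() are made-up names, and a real
implementation would go through the skb destruction path (as the RFC's
kfree_skb_bulk() does) rather than freeing heads straight into the cache.

  #include <linux/skbuff.h>
  #include <linux/slab.h>

  #define SKB_FREE_BATCH 32

  /* Hypothetical stand-in for the driver's per-ring completion logic. */
  static struct sk_buff *next_completed_tx_skb(void);

  /* Illustrative sketch only; batch skb heads during TX completion and
   * free them with one kmem_cache_free_bulk() call per batch. */
  static void tx_complete_bulk_free(struct kmem_cache *skb_cache)
  {
          void *batch[SKB_FREE_BATCH];
          unsigned int cnt = 0;
          struct sk_buff *skb;

          while ((skb = next_completed_tx_skb()) != NULL) {
                  /* A real version must release skb data/frags first,
                   * e.g. via the skb destruction path in the RFC. */
                  batch[cnt++] = skb;
                  if (cnt == SKB_FREE_BATCH) {
                          kmem_cache_free_bulk(skb_cache, cnt, batch);
                          cnt = 0;
                  }
          }
          if (cnt)
                  kmem_cache_free_bulk(skb_cache, cnt, batch);
  }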

This is an interesting start.  However, I feel it might work better if
you were to create a per-cpu pool of skbs that could be both freed and
allocated in NAPI context.  For example, we already have
napi_alloc_skb(), so why not add a napi_free_skb() and make the array
of objects to be freed part of a pool that can be used for either
allocation or freeing?  If the pool runs empty you allocate something
like 8 or 16 new skb heads, and if you fill it you free half of the
list back to the allocator.
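
Roughly the shape I have in mind, purely as a sketch: napi_free_skb(),
the pool structure, and the sizes below are hypothetical;
napi_alloc_skb() is the only piece that exists today, and this would
presumably live in net/core/skbuff.c next to skb_release_all().

  /* Hypothetical per-cpu pool of skb heads, only touched from
   * NAPI/softirq context, so no locking is needed. */
  #define NAPI_SKB_POOL_SIZE 64

  struct napi_skb_pool {
          unsigned int count;
          struct sk_buff *skbs[NAPI_SKB_POOL_SIZE];
  };
  static DEFINE_PER_CPU(struct napi_skb_pool, napi_skb_pool);

  /* Hypothetical napi_free_skb(): recycle the head into the pool; once
   * the pool fills up, bulk free half of it back to skbuff_head_cache. */
  static void napi_free_skb(struct sk_buff *skb)
  {
          struct napi_skb_pool *pool = this_cpu_ptr(&napi_skb_pool);

          skb_release_all(skb);           /* drop data, frags, dst, etc. */
          pool->skbs[pool->count++] = skb;
          if (pool->count == NAPI_SKB_POOL_SIZE) {
                  unsigned int keep = NAPI_SKB_POOL_SIZE / 2;

                  kmem_cache_free_bulk(skbuff_head_cache,
                                       NAPI_SKB_POOL_SIZE - keep,
                                       (void **)&pool->skbs[keep]);
                  pool->count = keep;
          }
  }

  /* The alloc side (napi_alloc_skb) would pop heads from the same pool
   * and, when it runs empty, refill 8 or 16 heads at a time with
   * something like kmem_cache_alloc_bulk(). */

That way the pool amortizes the slab calls on both the alloc and the
free side, instead of only batching frees during TX cleanup.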

- Alex

Thread overview: 27+ messages
2015-08-24  0:58 [PATCH V2 0/3] slub: introducing detached freelist Jesper Dangaard Brouer
2015-08-24  0:58 ` [PATCH V2 1/3] slub: extend slowpath __slab_free() to handle bulk free Jesper Dangaard Brouer
2015-08-24  0:59 ` [PATCH V2 2/3] slub: optimize bulk slowpath free by detached freelist Jesper Dangaard Brouer
2015-08-24  0:59 ` [PATCH V2 3/3] slub: build detached freelist with look-ahead Jesper Dangaard Brouer
2015-09-04 17:00 ` [RFC PATCH 0/3] Network stack, first user of SLAB/kmem_cache bulk free API Jesper Dangaard Brouer
2015-09-04 17:00   ` [RFC PATCH 1/3] net: introduce kfree_skb_bulk() user of kmem_cache_free_bulk() Jesper Dangaard Brouer
2015-09-04 18:47     ` Tom Herbert
2015-09-07  8:41       ` Jesper Dangaard Brouer
2015-09-07 16:25         ` Tom Herbert
2015-09-07 20:14           ` Jesper Dangaard Brouer
2015-09-08 21:01     ` David Miller
2015-09-04 17:01   ` [RFC PATCH 2/3] net: NIC helper API for building array of skbs to free Jesper Dangaard Brouer
2015-09-04 17:01   ` [RFC PATCH 3/3] ixgbe: bulk free SKBs during TX completion cleanup cycle Jesper Dangaard Brouer
2015-09-04 18:09   ` Alexander Duyck [this message]
2015-09-04 18:55     ` [RFC PATCH 0/3] Network stack, first user of SLAB/kmem_cache bulk free API Christoph Lameter
2015-09-04 20:39       ` Alexander Duyck
2015-09-04 23:45         ` Christoph Lameter
2015-09-05 11:18           ` Jesper Dangaard Brouer
2015-09-08 17:32             ` Christoph Lameter
2015-09-09 12:59               ` Jesper Dangaard Brouer
2015-09-09 14:08                 ` Christoph Lameter
2015-09-07  8:16     ` Jesper Dangaard Brouer
2015-09-07 21:23       ` Alexander Duyck
2015-09-16 10:02   ` Experiences with slub bulk use-case for network stack Jesper Dangaard Brouer
2015-09-16 15:13     ` Christoph Lameter
2015-09-17 20:17       ` Jesper Dangaard Brouer
2015-09-17 23:57         ` Christoph Lameter
