From: Jesper Dangaard Brouer
Subject: Re: [PATCH mm] slab: implement bulking for SLAB allocator
Date: Tue, 8 Sep 2015 17:54:51 +0200
Message-ID: <20150908175451.2ce83a0b@redhat.com>
References: <20150908142147.22804.37717.stgit@devil>
To: Christoph Lameter
Cc: iamjoonsoo.kim@lge.com, akpm@linux-foundation.org, linux-mm@kvack.org,
 netdev@vger.kernel.org, brouer@redhat.com

On Tue, 8 Sep 2015 10:22:32 -0500 (CDT) Christoph Lameter wrote:

> On Tue, 8 Sep 2015, Jesper Dangaard Brouer wrote:
>
> > Also notice how well bulking maintains the performance when the bulk
> > size increases (which is a sore spot for the SLUB allocator).
>
> Well, you are not actually completing the free action in SLAB. This is
> simply queueing the item to be freed later. Also, was this test done on
> a NUMA system? Alien caches at some point come into the picture.

This test was a single-CPU benchmark with no congestion or concurrency,
but the code was compiled with CONFIG_NUMA=y.

I don't know the SLAB code very well, but the kmem_cache_node->list_lock
looks like a scalability issue. I guess that is what you are referring
to ;-) (A simplified sketch of the queueing behavior follows below my
signature.)

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Sr. Network Kernel Developer at Red Hat
  Author of http://www.iptv-analyzer.org
  LinkedIn: http://www.linkedin.com/in/brouer
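
P.S. To make the "simply queueing" point concrete, below is a minimal
userspace sketch of the pattern, assuming my reading of mm/slab.c is
right. The array_cache fields (avail, limit, batchcount, entry[]) mirror
the kernel's struct array_cache, but the constants, the
flush_batch_to_node() helper, and cache_free_bulk() are hypothetical
illustrations, not the kernel code. The point it shows: the per-CPU
array absorbs frees with no locking, and the list_lock-protected slow
path only runs when a batch overflows back to the node lists.

/*
 * Simplified userspace sketch of SLAB-style free queueing.
 * NOT the actual mm/slab.c code; names mirror struct array_cache,
 * everything else is illustrative.
 */
#include <stdio.h>
#include <stdlib.h>

#define AC_LIMIT      120  /* array capacity before a flush (assumed) */
#define AC_BATCHCOUNT  60  /* objects moved per flush (assumed) */

struct array_cache {
	unsigned int avail;       /* objects currently queued */
	unsigned int limit;       /* capacity before a flush */
	unsigned int batchcount;  /* flush granularity */
	void *entry[AC_LIMIT];    /* queued object pointers */
};

/* Stand-in for the slow path that takes kmem_cache_node->list_lock. */
static void flush_batch_to_node(struct array_cache *ac)
{
	/* lock(&node->list_lock);  -- the contended part in real SLAB */
	for (unsigned int i = 0; i < ac->batchcount; i++)
		free(ac->entry[i]);       /* return to the "slab lists" */
	/* unlock(&node->list_lock); */

	/* Shift the survivors down to the front of the array. */
	ac->avail -= ac->batchcount;
	for (unsigned int i = 0; i < ac->avail; i++)
		ac->entry[i] = ac->entry[i + ac->batchcount];
}

/* Fast path: queue one object; no shared lock unless we overflow. */
static void cache_free_one(struct array_cache *ac, void *obj)
{
	if (ac->avail == ac->limit)
		flush_batch_to_node(ac);  /* slow path, takes the lock */
	ac->entry[ac->avail++] = obj;     /* lock-free per-CPU push */
}

/* Hypothetical bulk-free wrapper: N objects, still mostly lock-free. */
static void cache_free_bulk(struct array_cache *ac, size_t size, void **p)
{
	for (size_t i = 0; i < size; i++)
		cache_free_one(ac, p[i]);
}

int main(void)
{
	struct array_cache ac = { .limit = AC_LIMIT,
				  .batchcount = AC_BATCHCOUNT };
	void *objs[256];

	for (int i = 0; i < 256; i++)
		objs[i] = malloc(32);

	cache_free_bulk(&ac, 256, objs);
	printf("still queued per-CPU: %u\n", ac.avail);

	/* Drain the rest so the sketch does not leak. */
	while (ac.avail >= ac.batchcount)
		flush_batch_to_node(&ac);
	while (ac.avail)
		free(ac.entry[--ac.avail]);
	return 0;
}

If this matches the real behavior, it would explain why the per-object
cost stays flat as the bulk size grows: only roughly one in every
batchcount frees ever pays for the list_lock, and that is also where I
would expect the lock to show up under concurrency.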