From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Sun, 7 Feb 2016 10:57:44 +1100
From: Dave Chinner <david@fromorbit.com>
To: Waiman Long <Waiman.Long@hpe.com>
Cc: Thomas Gleixner, Ingo Molnar, "H. Peter Anvin", Alexander Viro,
	linux-fsdevel@vger.kernel.org, x86@kernel.org,
	linux-kernel@vger.kernel.org, Peter Zijlstra, Andi Kleen,
	Scott J Norton, Douglas Hatch
Subject: Re: [PATCH v2 1/3] lib/list_batch: A simple list insertion/deletion batching facility
Message-ID: <20160206235744.GI31407@dastard>
References: <1454095846-19628-1-git-send-email-Waiman.Long@hpe.com> <1454095846-19628-2-git-send-email-Waiman.Long@hpe.com> <20160201004708.GQ20456@dastard> <56B2893C.4030609@hpe.com>
In-Reply-To: <56B2893C.4030609@hpe.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Feb 03, 2016 at 06:11:56PM -0500, Waiman Long wrote:
> On 01/31/2016 07:47 PM, Dave Chinner wrote:
> > So at what point does simply replacing the list_head with a list_lru
> > become more efficient than this batch processing (i.e.
> > https://lkml.org/lkml/2015/3/10/660)?
> > The list_lru isn't a great fit for the inode list (it doesn't need
> > any of the special LRU/memcg stuff, https://lkml.org/lkml/2015/3/16/261)
> > but it will tell us if, like Ingo suggested, moving more towards a
> > generic per-cpu list would provide better overall performance...
>
> I will take a look at the list_lru patch to see if that helps. As for
> the per-cpu list, I tried that and it didn't quite work out.

OK, see my last email as to why Andi's patch didn't change anything.

The list_lru implementation has a list per node and a lock per node,
and each item is placed on the list of the node it is physically
allocated from. Hence for node-local workloads, the list and lock
touched on add/remove should belong to the local node, which confines
most of the cache line contention to within a single node.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com
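The per-node layout described above can be sketched in userspace roughly as
follows. This is only an illustration of the idea (one list and one lock per
node, items filed by allocating node), not the kernel's actual list_lru API;
all names (pernode_list, node_list_add, NR_NODES, ...) are made up for the
example, and pthread mutexes stand in for the kernel spinlocks.

```c
#include <assert.h>
#include <pthread.h>

#define NR_NODES 4		/* illustrative NUMA node count */

struct list_node {
	struct list_node *prev, *next;
};

/* One list plus one lock per node: contention on add/remove stays
 * within the node the item was allocated from. */
struct pernode_list {
	pthread_mutex_t lock;
	struct list_node head;	/* circular doubly linked list head */
	long nr_items;
};

struct node_lists {
	struct pernode_list node[NR_NODES];
};

static void node_lists_init(struct node_lists *nl)
{
	for (int i = 0; i < NR_NODES; i++) {
		pthread_mutex_init(&nl->node[i].lock, NULL);
		nl->node[i].head.prev = &nl->node[i].head;
		nl->node[i].head.next = &nl->node[i].head;
		nl->node[i].nr_items = 0;
	}
}

/* Add an item to the list of the node it was allocated from (nid). */
static void node_list_add(struct node_lists *nl, int nid,
			  struct list_node *item)
{
	struct pernode_list *pn = &nl->node[nid];

	pthread_mutex_lock(&pn->lock);
	item->next = pn->head.next;
	item->prev = &pn->head;
	pn->head.next->prev = item;
	pn->head.next = item;
	pn->nr_items++;
	pthread_mutex_unlock(&pn->lock);
}

static void node_list_del(struct node_lists *nl, int nid,
			  struct list_node *item)
{
	struct pernode_list *pn = &nl->node[nid];

	pthread_mutex_lock(&pn->lock);
	item->prev->next = item->next;
	item->next->prev = item->prev;
	pn->nr_items--;
	pthread_mutex_unlock(&pn->lock);
}
```

Two tasks working on items from different nodes never touch the same lock
or the same cache lines here, which is the property being claimed for the
workload above.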