Date: Mon, 1 Feb 2016 18:45:26 +0100
From: Andi Kleen
To: Ingo Molnar
Cc: Waiman Long, Thomas Gleixner, Ingo Molnar, "H. Peter Anvin",
	Alexander Viro, linux-fsdevel@vger.kernel.org, x86@kernel.org,
	linux-kernel@vger.kernel.org, Peter Zijlstra, Andi Kleen,
	Scott J Norton, Douglas Hatch
Subject: Re: [PATCH v2 3/3] vfs: Enable list batching for the superblock's inode list
Message-ID: <20160201174526.GA3696@two.firstfloor.org>
References: <1454095846-19628-1-git-send-email-Waiman.Long@hpe.com>
	<1454095846-19628-4-git-send-email-Waiman.Long@hpe.com>
	<20160130083557.GA31749@gmail.com>
In-Reply-To: <20160130083557.GA31749@gmail.com>

> I'm wondering: why are inode_sb_list_add()/del() even called for a presumably
> reasonably well cached benchmark running on a system with enough RAM? Are these
> perhaps thousands of temporary files, already deleted, and released when all the
> file descriptors are closed as part of sys_exit()?
>
> If that's the case, then I suspect an even bigger win would be not just to batch
> the (sb-)global list fiddling, but to potentially turn the sb list into a
> percpu_alloc() managed set of per CPU lists? It's a bigger change, but it could

We had such a patch in the lock elision patchkit (it avoided a lot of
cache line bouncing leading to aborts):

https://git.kernel.org/cgit/linux/kernel/git/ak/linux-misc.git/commit/?h=hle315/combined&id=f1cf9e715a40f44086662ae3b29f123cf059cbf4

-Andi