Date: Sat, 30 Jan 2016 09:35:57 +0100
From: Ingo Molnar
To: Waiman Long
Cc: Thomas Gleixner, Ingo Molnar, "H. Peter Anvin", Alexander Viro,
	linux-fsdevel@vger.kernel.org, x86@kernel.org, linux-kernel@vger.kernel.org,
	Peter Zijlstra, Andi Kleen, Scott J Norton, Douglas Hatch
Subject: Re: [PATCH v2 3/3] vfs: Enable list batching for the superblock's inode list
Message-ID: <20160130083557.GA31749@gmail.com>
References: <1454095846-19628-1-git-send-email-Waiman.Long@hpe.com>
	<1454095846-19628-4-git-send-email-Waiman.Long@hpe.com>
In-Reply-To: <1454095846-19628-4-git-send-email-Waiman.Long@hpe.com>

* Waiman Long wrote:

> The inode_sb_list_add() and inode_sb_list_del() functions in the vfs
> layer just perform list addition and deletion under lock, so they can
> use the new list batching facility to speed up those operations when
> many CPUs are trying to perform them simultaneously.
>
> In particular, inode_sb_list_del() can become a performance bottleneck
> when large applications with many threads and associated inodes exit.
> In an exit microbenchmark that creates a large number of threads,
> attaches many inodes to them, and then exits, the runtimes with 1000
> threads before and after the patch on a 4-socket Intel E7-4820 v3
> system (48 cores, 96 threads) were as follows:
>
>   Kernel        Elapsed Time    System Time
>   ------        ------------    -----------
>   Vanilla 4.4      65.29s          82m14s
>   Patched 4.4      45.69s          49m44s
>
> The elapsed time and the reported system time were reduced by 30% and
> 40% respectively.

That's pretty impressive!

I'm wondering, why are inode_sb_list_add()/del() even called for a presumably
reasonably well cached benchmark running on a system with enough RAM? Are these
perhaps thousands of temporary files, already deleted, and released when all the
file descriptors are closed as part of sys_exit()?

If that's the case then I suspect an even bigger win would be not just to batch
the (sb-)global list fiddling, but to potentially turn the sb list into a
percpu_alloc() managed set of per-CPU lists?

It's a bigger change, but it could speed up a lot of other temporary-file
intensive use cases as well, not just batched deletes.

Thanks,

	Ingo
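
To make the batching idea quoted above concrete, here is a minimal userspace
sketch: instead of every CPU taking the list lock to perform its own add or
delete, each thread publishes its operation on a pending queue, and whichever
thread acquires the lock (the "combiner") executes all queued operations in one
critical section. The names (list_batch, batch_op, list_batch_run) and the
pthread/C11-atomics implementation are invented for this illustration; this is
not the patch's actual kernel API.

/*
 * Hypothetical userspace illustration -- NOT the kernel's list_batch code.
 * Threads publish add/del requests on a lock-free pending stack; the
 * thread that gets the mutex ("combiner") drains the stack and executes
 * every queued operation in a single critical section.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct list_node {
	struct list_node *prev, *next;
};

enum batch_cmd { BATCH_ADD, BATCH_DEL };

struct batch_op {
	enum batch_cmd	  cmd;
	struct list_node *node;
	atomic_bool	  done;		/* set once the op has been executed */
	struct batch_op	 *next_req;	/* links requests on the pending stack */
};

struct list_batch {
	pthread_mutex_t		   lock;    /* protects the list itself */
	struct list_node	   head;    /* the shared doubly linked list */
	_Atomic(struct batch_op *) pending; /* LIFO stack of queued requests */
};

void list_batch_init(struct list_batch *b)
{
	pthread_mutex_init(&b->lock, NULL);
	b->head.prev = b->head.next = &b->head;
	atomic_init(&b->pending, NULL);
}

static void exec_op(struct list_batch *b, struct batch_op *op)
{
	struct list_node *n = op->node;

	if (op->cmd == BATCH_ADD) {		/* add at the head */
		n->next = b->head.next;
		n->prev = &b->head;
		b->head.next->prev = n;
		b->head.next = n;
	} else {				/* unlink */
		n->prev->next = n->next;
		n->next->prev = n->prev;
	}
}

void list_batch_run(struct list_batch *b, enum batch_cmd cmd,
		    struct list_node *node)
{
	struct batch_op op = { .cmd = cmd, .node = node };

	atomic_init(&op.done, false);
	op.next_req = atomic_load(&b->pending);
	while (!atomic_compare_exchange_weak(&b->pending, &op.next_req, &op))
		;				/* publish our request */

	while (!atomic_load(&op.done)) {
		if (pthread_mutex_trylock(&b->lock))
			continue;		/* spin: a combiner is running */

		/* We are the combiner: drain and execute all requests. */
		struct batch_op *req = atomic_exchange(&b->pending, NULL);

		while (req) {
			/* Read next_req before setting done: the owner may
			 * return (freeing its stack frame) once done is set. */
			struct batch_op *next = req->next_req;

			exec_op(b, req);
			atomic_store(&req->done, true);
			req = next;
		}
		pthread_mutex_unlock(&b->lock);
	}
}

The win comes from the combiner manipulating the list while its cachelines are
already hot in that one CPU's cache, instead of the lock and the list head
bouncing across the interconnect once per operation.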
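
And a sketch of the per-CPU-list alternative suggested in the reply: shard the
single sb-wide list into one list (and one lock) per CPU, so unrelated CPUs stop
contending on a single global lock. A kernel version would presumably be built
on alloc_percpu()/this_cpu_ptr(); the fixed shard array and all names below
(shard_list, shard_list_add, ...) are invented for this userspace illustration.

/*
 * Hypothetical userspace illustration of per-CPU list sharding -- not
 * actual kernel code.  The single global list becomes NR_SHARDS lists,
 * each with its own lock; a node remembers its shard so deletion only
 * takes that one lock.
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

#define NR_SHARDS 64			/* stand-in for the number of CPUs */

struct shard_node {
	struct shard_node *prev, *next;
	int shard;			/* which shard this node lives on */
};

struct shard_list {
	pthread_mutex_t	  lock;
	struct shard_node head;
} lists[NR_SHARDS];

void shard_lists_init(void)
{
	for (int i = 0; i < NR_SHARDS; i++) {
		pthread_mutex_init(&lists[i].lock, NULL);
		lists[i].head.prev = lists[i].head.next = &lists[i].head;
	}
}

void shard_list_add(struct shard_node *n)
{
	int cpu = sched_getcpu();	/* pick the current CPU's shard */
	struct shard_list *l = &lists[(cpu < 0 ? 0 : cpu) % NR_SHARDS];

	pthread_mutex_lock(&l->lock);
	n->shard = (int)(l - lists);
	n->next = l->head.next;
	n->prev = &l->head;
	l->head.next->prev = n;
	l->head.next = n;
	pthread_mutex_unlock(&l->lock);
}

void shard_list_del(struct shard_node *n)
{
	struct shard_list *l = &lists[n->shard];

	pthread_mutex_lock(&l->lock);
	n->prev->next = n->next;
	n->next->prev = n->prev;
	pthread_mutex_unlock(&l->lock);
}

The trade-off is that whole-list walks (e.g. evict_inodes() iterating
sb->s_inodes) must then visit every shard, and each node has to remember which
shard it was added to so deletion can relock the right one.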