Date: Sun, 13 Sep 2015 09:12:44 -0400
From: Chris Mason
To: Linus Torvalds, Josef Bacik, LKML, linux-fsdevel, Dave Chinner,
	Neil Brown, Jan Kara, Christoph Hellwig
Subject: Re: [PATCH] fs-writeback: drop wb->list_lock during blk_finish_plug()
Message-ID: <20150913131244.GA15926@ret.masoncoding.com>
References: <55F33C2B.1010508@fb.com>
	<20150911231636.GC4150@ret.masoncoding.com>
	<20150912230027.GE4150@ret.masoncoding.com>
	<20150912234632.GF4150@ret.masoncoding.com>
In-Reply-To: <20150912234632.GF4150@ret.masoncoding.com>

On Sat, Sep 12, 2015 at 07:46:32PM -0400, Chris Mason wrote:
> I don't think the XFS numbers can be trusted too much since it was
> basically bottlenecked behind that single pegged CPU.  It was bouncing
> around and I couldn't quite track it down to a process name (or perf
> profile).

I'll do more runs Monday, but I was able to grab a perf profile of the
pegged XFS CPU.  It was just the writeback worker thread, and it hit
btrfs differently because we defer more of this stuff to endio workers,
effectively spreading it out over more CPUs.  With 4 mount points
instead of 2, XFS goes from 140K files/sec to 250K.

Here's one of the profiles, but it bounced around a lot so I wouldn't
use this to actually tune anything:

 11.42%  kworker/u82:61  [kernel.kallsyms]  [k] _raw_spin_lock
         |
         ---_raw_spin_lock
            |
            |--83.43%-- xfs_extent_busy_trim
            |          xfs_alloc_compute_aligned
            |          |
            |          |--99.92%-- xfs_alloc_ag_vextent_near
            |          |          xfs_alloc_ag_vextent
            |          |          xfs_alloc_vextent
            |          |          xfs_bmap_btalloc
            |          |          xfs_bmap_alloc
            |          |          xfs_bmapi_write
            |          |          xfs_iomap_write_allocate
            |          |          xfs_map_blocks
            |          |          xfs_vm_writepage
            |          |          __writepage
            |          |          write_cache_pages
            |          |          generic_writepages
            |          |          xfs_vm_writepages
            |          |          do_writepages
            |          |          __writeback_single_inode
            |          |          writeback_sb_inodes
            |          |          __writeback_inodes_wb
            |          |          wb_writeback
            |          |          wb_do_writeback
            |          |          wb_workfn
            |          |          process_one_work
            |          |          worker_thread
            |          |          kthread
            |          |          ret_from_fork
            |           --0.08%-- [...]
            |
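
For anyone who wants the "defer to endio workers" idea above in code
form, the pattern looks roughly like the sketch below.  To be clear,
this is not the actual btrfs code; the example_* names and the
workqueue flags are assumptions made up for illustration.  The point
is only the split: the bio completion callback does almost nothing,
and the expensive bookkeeping is queued to an unbound workqueue so
the scheduler can spread it across CPUs.

/* Minimal sketch only, not btrfs code.  All example_* names and the
 * workqueue flags are hypothetical. */
#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/bio.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct example_endio_work {
	struct work_struct work;
	struct bio *bio;
};

/* WQ_UNBOUND lets the work run on whichever CPU the scheduler picks
 * rather than the CPU that happened to take the completion. */
static struct workqueue_struct *example_endio_wq;

static int example_endio_setup(void)
{
	example_endio_wq = alloc_workqueue("example-endio",
					   WQ_UNBOUND | WQ_MEM_RECLAIM, 0);
	return example_endio_wq ? 0 : -ENOMEM;
}

/* Process-context half: the heavy per-extent/per-inode bookkeeping
 * that would otherwise peg the completing CPU lives here. */
static void example_endio_fn(struct work_struct *work)
{
	struct example_endio_work *ew =
		container_of(work, struct example_endio_work, work);

	/* ... expensive completion processing goes here ... */
	bio_put(ew->bio);
	kfree(ew);
}

/* Completion half: do as little as possible and punt to the worker.
 * The submitter is assumed to have stashed ew in bio->bi_private. */
static void example_end_io(struct bio *bio)
{
	struct example_endio_work *ew = bio->bi_private;

	ew->bio = bio;
	INIT_WORK(&ew->work, example_endio_fn);
	queue_work(example_endio_wq, &ew->work);
}

With that split, the per-bio cost on the completing CPU is basically an
INIT_WORK plus a queue_work, which is why the btrfs writeback load
showed up spread over several CPUs instead of as one pegged worker
thread.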