From mboxrd@z Thu Jan 1 00:00:00 1970
From: Dave Chinner
Subject: Re: [PATCH] fs-writeback: drop wb->list_lock during blk_finish_plug()
Date: Fri, 18 Sep 2015 10:37:35 +1000
Message-ID: <20150918003735.GR3902@dastard>
References: <20150916195806.GD29530@quack.suse.cz>
 <20150916200012.GB8624@ret.masoncoding.com>
 <20150916220704.GM3902@dastard>
 <20150917003738.GN3902@dastard>
 <20150917021453.GO3902@dastard>
 <20150917224230.GF8624@ret.masoncoding.com>
 <20150917235647.GG8624@ret.masoncoding.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
To: Chris Mason, Linus Torvalds, Jan Kara, Josef Bacik, LKML,
 linux-fsdevel, Neil Brown, Christoph Hellwig, Tejun Heo
Return-path:
Content-Disposition: inline
In-Reply-To: <20150917235647.GG8624@ret.masoncoding.com>
Sender: linux-kernel-owner@vger.kernel.org
List-Id: linux-fsdevel.vger.kernel.org

On Thu, Sep 17, 2015 at 07:56:47PM -0400, Chris Mason wrote:
> On Thu, Sep 17, 2015 at 04:08:19PM -0700, Linus Torvalds wrote:
> > On Thu, Sep 17, 2015 at 3:42 PM, Chris Mason wrote:
> > >
> > > Playing around with the plug a little, most of the unplugs are
> > > coming from the cond_resched_lock(). Not really sure why we are
> > > doing the cond_resched() there, we should be doing it before we
> > > retake the lock instead.
> > >
> > > This patch takes my box (with dirty thresholds at 1.5GB/3GB) from
> > > 195K files/sec up to 213K. Average IO size is the same as 4.3-rc1.
> >
> > Ok, so at least for you, part of the problem really ends up being
> > that there's a mix of the "synchronous" unplugging (by the actual
> > explicit "blk_finish_plug(&plug);") and the writeback that is handed
> > off to kblockd_workqueue.
> >
> > I'm not seeing why that should be an issue. Sure, there's some CPU
> > overhead to context switching, but I don't see that it should be
> > that big of a deal.

It may well change the dispatch order of enough IOs for it to be
significant on an IO bound device.

> > I wonder if there is something more serious wrong with the
> > kblockd_workqueue.
>
> I'm driving the box pretty hard, it's right on the line between CPU
> bound and IO bound. So I've got 32 fs_mark processes banging away and
> 32 CPUs (16 really, with hyperthreading).

I'm only using 8 threads right now, so I have ~6-7 idle CPUs on this
workload. Hence if it's CPU load related, I probably won't see any
change in behaviour.

> They are popping in and out of balance_dirty_pages() so I have high
> CPU utilization alternating with high IO wait times. There are no
> reads at all, so all of these waits are for buffered writes.
>
> People in balance_dirty_pages are indirectly waiting on the unplug,
> so maybe the context switch overhead on a loaded box is enough to
> explain it. We've definitely gotten more than 9% by inlining small
> synchronous items in btrfs in the past, but those were more
> explicitly synchronous.
>
> I know it's painfully hand wavy. I don't see any other users of the
> kblockd workqueues, and the perf profiles don't jump out at me. I'll
> feel better about the patch if Dave confirms any gains.

In outright performance on my test machine, the difference in files/s
is noise. However, the consistency looks to be substantially improved
and the context switch rate is now running at under 3,000/sec.
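
For reference, the change being tested boils down to something like
this (a rough sketch only, not the actual patch; the loop body and
locking context are illustrative):

        /*
         * Finish the plug and reschedule with wb->list_lock dropped,
         * rather than calling cond_resched_lock() with the plug still
         * active, which punts the unplug to kblockd via the scheduler.
         */
        struct blk_plug plug;

        blk_start_plug(&plug);
        spin_lock(&wb->list_lock);
        while (!list_empty(&wb->b_io)) {
                /* ... queue inodes for writeback ... */
                if (need_resched()) {
                        spin_unlock(&wb->list_lock);
                        blk_finish_plug(&plug); /* synchronous unplug */
                        cond_resched();
                        blk_start_plug(&plug);
                        spin_lock(&wb->list_lock);
                }
        }
        spin_unlock(&wb->list_lock);
        blk_finish_plug(&plug);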
Numbers, including the std deviation of the files/s output during the
fsmark run (averaged across 3 separate benchmark runs):

                        files/s    std-dev    wall time
4.3-rc1-noplug           34400     2.0e04       5m25s
4.3-rc1                  56600     2.3e04       3m23s
4.3-rc1-flush            56079     1.4e04       3m14s

std-dev is well down, and the improvement in wall time is large enough
to be significant.

Looks good to me.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com