From: Wu Fengguang
Subject: Re: [PATCH 0/7] writeback: avoid touching dirtied_when on blocked inodes
Date: Sat, 22 Oct 2011 15:46:07 +0800
Message-ID: <20111022074607.GA4720@localhost>
In-Reply-To: <20111022053851.GA23033@localhost>
References: <20111020152240.751936131@intel.com> <20111020232116.GB20542@quack.suse.cz> <20111021104049.GA3784@localhost> <20111021195448.GA10166@quack.suse.cz> <20111022031135.GA4823@localhost> <20111022053851.GA23033@localhost>
To: Jan Kara
Cc: "linux-fsdevel@vger.kernel.org", Dave Chinner, Christoph Hellwig, Andrew Morton, LKML

On Sat, Oct 22, 2011 at 01:38:51PM +0800, Wu Fengguang wrote:
> On Sat, Oct 22, 2011 at 11:11:35AM +0800, Wu Fengguang wrote:
> > > > btw, with the I_SYNC case converted, it's actually no longer necessary
> > > > to keep a standalone b_more_io_wait. It should still be better to keep
> > > > the list and the above error check for catching possible errors and
> > > > the flexibility of adding policies like "don't retry possibly blocked
> > > > inodes for N seconds as long as there are other inodes to work with".
> > > >
> > > > The below diff only intends to show the _possibility_ of removing
> > > > b_more_io_wait:
> > >
> > > Good observation. So should we introduce b_more_io_wait in the end? We
> > > could always introduce it when the need for some more complicated policy
> > > comes...
> >
> > I have no problem removing it if you like it better. Anyway, let me
> > test the idea out first (just kicked off the tests).
>
> With b_more_io_wait removed, write bandwidth drops slightly compared
> to the full more_io_wait patchset:
>
> 3.1.0-rc9-ioless-full-more_io_wait-next-20111014+  3.1.0-rc9-ioless-full-more_io_wait-x-next-20111014+
> ------------------------  ------------------------
>                   45.30        +6.3%         48.14  thresh=1G/ext3-1dd-4k-8p-4096M-1024M:10-X
>                   48.23        -2.0%         47.27  thresh=1G/ext4-100dd-4k-8p-4096M-1024M:10-X
>                   54.21        -2.6%         52.80  thresh=1G/ext4-10dd-4k-8p-4096M-1024M:10-X
>                   56.07        -0.3%         55.91  thresh=1G/ext4-1dd-4k-8p-4096M-1024M:10-X
>                   45.12        -5.8%         42.49  thresh=1G/xfs-100dd-4k-8p-4096M-1024M:10-X
>                   53.94        -1.2%         53.27  thresh=1G/xfs-10dd-4k-8p-4096M-1024M:10-X
>                   55.66        -0.1%         55.63  thresh=1G/xfs-1dd-4k-8p-4096M-1024M:10-X
>                  358.53        -0.8%        355.51  TOTAL write_bw
>
> I'll try to reduce the changes and retest.

Unfortunately, the reduced combination (patches 1-4 plus the I_SYNC
change, with requeue_more_io_wait removed) still performs noticeably
worse:

3.1.0-rc9-ioless-full-next-20111014+  3.1.0-rc9-ioless-full-more_io_wait-x2-next-20111014+
------------------------  ------------------------
                  49.84        -7.9%         45.91  thresh=1G/ext4-100dd-4k-8p-4096M-1024M:10-X
                  56.03        -7.2%         52.01  thresh=1G/ext4-10dd-4k-8p-4096M-1024M:10-X
                  57.42        -1.7%         56.45  thresh=1G/ext4-1dd-4k-8p-4096M-1024M:10-X
                  45.74        -2.8%         44.48  thresh=1G/xfs-100dd-4k-8p-4096M-1024M:10-X
                  54.19        -4.8%         51.57  thresh=1G/xfs-10dd-4k-8p-4096M-1024M:10-X
                  55.93        -2.2%         54.70  thresh=1G/xfs-1dd-4k-8p-4096M-1024M:10-X
                 319.14        -4.4%        305.12  TOTAL write_bw

Thanks,
Fengguang
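
P.S. As a rough illustration of the tradeoff discussed above, a toy
userspace model follows (a sketch only; names such as blocked_since
and worth_retrying are illustrative, not the actual kernel API, and
redirty_tail here is deliberately simplified). Redirtying a blocked
inode resets dirtied_when and so destroys its position in the aging
order, while parking it on a separate b_more_io_wait list keeps the
stamp intact and still allows a "don't retry for N seconds" policy:

/*
 * Toy model of the b_more_io_wait idea: a blocked inode (e.g. one
 * under I_SYNC in another thread) is parked on a wait list instead of
 * being redirtied, so its dirtied_when stamp is preserved.
 */
#include <stdio.h>
#include <time.h>

struct inode {
	int ino;
	time_t dirtied_when;	/* when the inode was first dirtied */
	time_t blocked_since;	/* when writeback last found it busy */
};

/* Simplified model of redirtying: the inode looks freshly dirtied
 * and loses its true age. */
static void redirty_tail(struct inode *inode, time_t now)
{
	inode->dirtied_when = now;
}

/* The b_more_io_wait variant: remember when we got blocked but leave
 * dirtied_when alone.  The real code would also move the inode, e.g.
 * list_move(&inode->i_wb_list, &wb->b_more_io_wait). */
static void requeue_io_wait(struct inode *inode, time_t now)
{
	inode->blocked_since = now;
}

/* Example policy from the discussion: skip a blocked inode for nsec
 * seconds as long as other inodes are available to work with. */
static int worth_retrying(const struct inode *inode, time_t now, int nsec)
{
	return now - inode->blocked_since >= nsec;
}

int main(void)
{
	time_t now = time(NULL);
	struct inode a = { .ino = 1, .dirtied_when = now - 30 };
	struct inode b = { .ino = 2, .dirtied_when = now - 30 };

	redirty_tail(&a, now);		/* old behavior */
	requeue_io_wait(&b, now);	/* proposed behavior */

	printf("redirty_tail:    inode %d age %ld s\n",
	       a.ino, (long)(now - a.dirtied_when));
	printf("requeue_io_wait: inode %d age %ld s, retry now? %s\n",
	       b.ino, (long)(now - b.dirtied_when),
	       worth_retrying(&b, now, 5) ? "yes" : "no");
	return 0;
}

Compiled as plain C, the redirtied inode reports age 0 while the
parked one keeps its 30 second age, which is the ordering property the
patch subject ("avoid touching dirtied_when on blocked inodes") is
about.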