From: "Zhang, Yanmin"
Subject: Re: [PATCH 0/11] Per-bdi writeback flusher threads v9
Date: Fri, 05 Jun 2009 09:14:47 +0800
Message-ID: <1244164487.2560.146.camel@ymzhang>
References: <1243511204-2328-1-git-send-email-jens.axboe@oracle.com> <20090604152040.GA6007@nowhere>
In-Reply-To: <20090604152040.GA6007@nowhere>
To: Frederic Weisbecker
Cc: Jens Axboe, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, tytso@mit.edu, chris.mason@oracle.com, david@fromorbit.com, hch@infradead.org, akpm@linux-foundation.org, jack@suse.cz, richard@rsk.demon.co.uk, damien.wyart@free.fr

On Thu, 2009-06-04 at 17:20 +0200, Frederic Weisbecker wrote:
> Hi,
>
> On Thu, May 28, 2009 at 01:46:33PM +0200, Jens Axboe wrote:
> > Hi,
> >
> > Here's the 9th version of the writeback patches. Changes since v8:

> I've just tested it on UP with a single disk.
>
> I've run two parallel dbench tests on two partitions and
> tried it with this patch and without.

I also tested V9 with a multiple-dbench workload, starting several
dbench tasks where each task has 4 processes doing I/O on one
partition (file system). Mostly I used JBODs with 7/11/13 disks.
I didn't see any regression between the vanilla and V9 kernels
with this workload.

> I used 30 procs each for 600 secs.
>
> You can see the result in attachment.
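For reference, the multiple-dbench harness mentioned above can be sketched roughly as below. Disk names, mount points, and the partition count are illustrative stand-ins (the real JBODs had 7/11/13 disks); the script only echoes the commands as a dry run rather than launching dbench:

```shell
#!/bin/sh
# Rough sketch of the multiple-dbench workload: one dbench instance
# per JBOD partition, 4 client processes each. Disk names and mount
# points below are made up for illustration.

NPROCS=4                  # client processes per dbench instance
RUNTIME=600               # seconds, matching the 600s runs in this thread
DISKS="sdb1 sdc1 sdd1"    # stand-ins; the real JBODs had 7/11/13 disks

for d in $DISKS; do
    # Each partition carries its own filesystem, hence its own bdi.
    # Echo instead of exec so this stays a dry-run sketch.
    echo "dbench -D /mnt/$d -t $RUNTIME $NPROCS &"
done
```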
> And also there:
>
> http://kernel.org/pub/linux/kernel/people/frederic/dbench.pdf
> http://kernel.org/pub/linux/kernel/people/frederic/bdi-writeback-hda1.log
> http://kernel.org/pub/linux/kernel/people/frederic/bdi-writeback-hda3.log
> http://kernel.org/pub/linux/kernel/people/frederic/pdflush-hda1.log
> http://kernel.org/pub/linux/kernel/people/frederic/pdflush-hda3.log
>
> As you can see, bdi writeback is faster than pdflush on hda1 and slower
> on hda3. But, well, that's not the point.
>
> What I can observe here is the difference in the standard deviation
> of the rate between two parallel writers on the same device (but
> two different partitions, hence two superblocks).
>
> With pdflush, the distributed rate is much better balanced than
> with bdi writeback on a single device.
>
> I'm not sure why. Is there something in these patches that makes
> several bdi flusher threads for the same bdi poorly balanced
> against each other?
>
> Frederic.
--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel"
in the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html