From: Chris Mason
Subject: Re: [PATCH] writeback: plug writeback at a high level
Date: Mon, 17 Jun 2013 10:34:57 -0400
Message-ID: <20130617143457.9127.66403@localhost.localdomain>
References: <1371264650-21931-1-git-send-email-david@fromorbit.com>
In-Reply-To: <1371264650-21931-1-git-send-email-david@fromorbit.com>
To: Dave Chinner, linux-fsdevel@vger.kernel.org

Quoting Dave Chinner (2013-06-14 22:50:50)
> From: Dave Chinner
>
> Doing writeback on lots of little files causes terrible IOPS storms
> because of the per-mapping writeback plugging we do. This
> essentially causes immediate dispatch of IO for each mapping,
> regardless of the context in which writeback is occurring.
>
> IOWs, running a concurrent write-lots-of-small-4k-files workload
> using fsmark on XFS results in a huge number of IOPS being issued
> for data writes. Metadata writes are sorted and plugged at a high
> level by XFS, so they aggregate nicely into large IOs. However,
> data writeback IOs are dispatched in individual 4k IOs, even when
> the blocks of two consecutively written files are adjacent.
>
> Test VM: 8p, 8GB RAM, 4x SSD in RAID0, 100TB sparse XFS filesystem,
> metadata CRCs enabled.
>
> Kernel: 3.10-rc5 + xfsdev + my 3.11 xfs queue (~70 patches)

I'm a little worried about this one, just because of the impact on
ssds from plugging in the aio code:

https://lkml.org/lkml/2011/12/13/326

How exactly was your FS created? I'll try it here.

-chris
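
(For reference, "plugging writeback at a high level" boils down to holding a
single block-layer plug across the whole per-inode writeback loop, so the small
per-file IOs can merge and sort before dispatch rather than being kicked out
one mapping at a time. Below is a minimal sketch of the idea; the helper name,
the i_io_list field, and the placement of the plug are illustrative assumptions,
not the actual patch.)

    #include <linux/blkdev.h>   /* struct blk_plug, blk_start_plug(), blk_finish_plug() */
    #include <linux/fs.h>       /* struct inode */
    #include <linux/list.h>     /* list_for_each_entry() */

    /*
     * Illustrative only: write back a whole batch of inodes under one
     * block-layer plug.  IO issued inside the plugged section queues on
     * the per-task plug list, where adjacent requests can be merged and
     * sorted, and is dispatched in one go at blk_finish_plug().
     */
    static long writeback_inode_batch(struct list_head *batch)
    {
            struct blk_plug plug;
            struct inode *inode;
            long wrote = 0;

            blk_start_plug(&plug);
            list_for_each_entry(inode, batch, i_io_list) {
                    /* per-inode writeback would go here; the resulting
                     * bios sit on the plug until the unplug below */
                    wrote++;
            }
            blk_finish_plug(&plug); /* single unplug for the whole batch */

            return wrote;
    }

With the plug held at this level, two adjacent 4k files written back in the
same pass can merge into one larger request, which is the aggregation the
quoted patch description is after.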