From: Jan Kara <jack@suse.cz>
To: Dave Chinner <david@fromorbit.com>
Cc: Jan Kara <jack@suse.cz>,
	linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	akpm@linux-foundation.org, davej@redhat.com,
	viro@zeniv.linux.org.uk, glommer@parallels.com
Subject: Re: [PATCH 01/11] writeback: plug writeback at a high level
Date: Thu, 1 Aug 2013 10:34:47 +0200	[thread overview]
Message-ID: <20130801083447.GA19219@quack.suse.cz> (raw)
In-Reply-To: <20130801054805.GO7118@dastard>

On Thu 01-08-13 15:48:05, Dave Chinner wrote:
> On Wed, Jul 31, 2013 at 04:40:19PM +0200, Jan Kara wrote:
> > On Wed 31-07-13 14:15:40, Dave Chinner wrote:
> > > From: Dave Chinner <dchinner@redhat.com>
> > > 
> > > Doing writeback on lots of little files causes terrible IOPS storms
> > > because of the per-mapping writeback plugging we do. This
> > > essentially causes immediate dispatch of IO for each mapping,
> > > regardless of the context in which writeback is occurring.
> > > 
> > > IOWs, running a concurrent write-lots-of-small-4k-files workload using fsmark
> > > on XFS results in a huge number of IOPS being issued for data
> > > writes.  Metadata writes are sorted and plugged at a high level by
> > > XFS, so aggregate nicely into large IOs. However, data writeback IOs
> > > are dispatched in individual 4k IOs, even when the blocks of two
> > > consecutively written files are adjacent.
> > > 
> > > Test VM: 8p, 8GB RAM, 4xSSD in RAID0, 100TB sparse XFS filesystem,
> > > metadata CRCs enabled.
> > > 
> > > Kernel: 3.10-rc5 + xfsdev + my 3.11 xfs queue (~70 patches)
> > > 
> > > Test:
> > > 
> > > $ ./fs_mark  -D  10000  -S0  -n  10000  -s  4096  -L  120  -d
> > > /mnt/scratch/0  -d  /mnt/scratch/1  -d  /mnt/scratch/2  -d
> > > /mnt/scratch/3  -d  /mnt/scratch/4  -d  /mnt/scratch/5  -d
> > > /mnt/scratch/6  -d  /mnt/scratch/7
> > > 
> > > Result:
> > > 
> > > 		wall	sys	create rate	Physical write IO
> > > 		time	CPU	(avg files/s)	 IOPS	Bandwidth
> > > 		-----	-----	------------	------	---------
> > > unpatched	6m56s	15m47s	24,000+/-500	26,000	130MB/s
> > > patched		5m06s	13m28s	32,800+/-600	 1,500	180MB/s
> > > improvement	-26.44%	-14.68%	  +36.67%	-94.23%	+38.46%
> > > 
> > > If I use zero length files, this workload runs at about 500 IOPS, so
> > > plugging drops the data IOs from roughly 25,500/s to 1,000/s.
> > > 3 lines of code, 35% better throughput for 15% less CPU.
> > > 
> > > The benefits of plugging at this layer are likely to be higher for
> > > spinning media, as the IO patterns for this workload are going to make a
> > > much bigger difference on high IO latency devices.....
> > > 
> > > Signed-off-by: Dave Chinner <dchinner@redhat.com>
> >   Just one question: Won't this cause a regression when files are, say,
> > 2 MB in size? Then we generate maximum sized requests for these files with
> > per-inode plugging anyway and they will unnecessarily sit in the plug list
> > until the plug list gets full (that is, after 16 requests). Granted, it
> > shouldn't be too long, but with fast storage it may be measurable...
> 
> Latency of IO dispatch only matters for the initial IOs being
> queued. This, however, is not a latency sensitive IO path -
> writeback is our bulk throughput IO engine, and in those cases low
> latency dispatch is precisely what we don't want. We want to
> optimise IO patterns for maximum *bandwidth*, not minimal latency.
> 
> The problem is that fast storage with immediate dispatch and deep
> queues can keep ahead of IO dispatch, preventing throughput
> optimisations like IO aggregation from being made because there is
> never any IO queued to aggregate. That's why I'm seeing a couple of
> orders of magnitude higher IOPS than I should. Sure, the hardware
> can do that, but it's not the *most efficient* method of dispatching
> background IO.
> 
> Allowing IOs a chance to aggregate in the scheduler for a short
> while before dispatch allows existing bulk throughput optimisations
> to be made to the IO stream, and as we can see, where a delayed
> allocation filesystem is optimised for adjacent allocation
> across sequentially written inodes, such opportunities for IO
> aggregation make a big difference to performance.
  Yeah, I understand the reasoning but sometimes experimental results
differ from theory :)
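
For context on what "plugging at a high level" means mechanically: the
idea is to hold a block-layer plug across the whole per-sb inode
writeback loop instead of relying on per-mapping plugging, so requests
from consecutively written inodes get a chance to merge before they are
dispatched. A rough sketch of that pattern, using the stock
blk_start_plug()/blk_finish_plug() API from <linux/blkdev.h>; the
placement shown in writeback_sb_inodes() is illustrative and may not
match the posted patch line-for-line:

	static long writeback_sb_inodes(struct super_block *sb,
					struct bdi_writeback *wb,
					struct wb_writeback_work *work)
	{
		struct blk_plug plug;
		long wrote = 0;

		/*
		 * Hold back request dispatch for the whole b_io walk so
		 * that adjacent blocks from consecutively written inodes
		 * can merge into large IOs instead of being issued as
		 * individual 4k writes.
		 */
		blk_start_plug(&plug);

		while (!list_empty(&wb->b_io)) {
			/*
			 * Existing loop body: write back one inode and
			 * move it off b_io, unchanged by the patch.
			 */
		}

		/* Unplug: the elevator now sees a batch it can sort and merge. */
		blk_finish_plug(&plug);
		return wrote;
	}

The same pattern could equally sit one level up in wb_writeback(); the
point is only that the plug lives above per-inode/per-mapping granularity.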

> So, to test your 2MB IO case, I ran a fsmark test using 40,000
> 2MB files instead of 10 million 4k files.
> 
> 		wall time	IOPS	BW
> mmotm		170s		1000	350MB/s
> patched		167s		1000	350MB/s
> 
> The IO profiles are near enough to be identical, and the wall time
> is basically the same.
> 
> 
> I just don't see any particular concern about larger IOs and initial
> dispatch latency here from either a theoretical or an observed POV.
> Indeed, I haven't seen a performance degradation as a result of this
> patch in any of the testing I've done since I first posted it...
  Thanks for doing the test! So I'm fine with this patch. You can add:
Reviewed-by: Jan Kara <jack@suse.cz>

> > Now if we have a maximum sized request in the plug list, maybe we could
> > just dispatch it right away, but that's another story.
> 
> That, in itself, is potentially an issue, too, as it prevents seek
> minimisation optimisations from being made when we batch up multiple
> IOs on the plug list...
  Good point.
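
As a reference for the "16 requests" figure above: the block layer's
request submission path of this era flushes a task's plug list once it
holds BLK_MAX_REQUEST_COUNT requests. A sketch, loosely after the
3.10-era blk_queue_bio() in block/blk-core.c (names and details
approximate):

	#define BLK_MAX_REQUEST_COUNT	16

	if (plug) {
		/*
		 * If the plug already holds BLK_MAX_REQUEST_COUNT requests,
		 * flush it before adding another - this is the "plug list
		 * gets full ... after 16 requests" behaviour, and the spot
		 * a "dispatch max-sized requests immediately" tweak would
		 * have to touch.
		 */
		if (!list_empty(&plug->list) &&
		    request_count >= BLK_MAX_REQUEST_COUNT)
			blk_flush_plug_list(plug, false);

		list_add_tail(&req->queuelist, &plug->list);
	}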

								Honza

-- 
Jan Kara <jack@suse.cz>
SUSE Labs, CR

Thread overview (32+ messages):
2013-07-31  4:15 [PATCH 00/11] Sync and VFS scalability improvements Dave Chinner
2013-07-31  4:15 ` [PATCH 01/11] writeback: plug writeback at a high level Dave Chinner
2013-07-31 14:40   ` Jan Kara
2013-08-01  5:48     ` Dave Chinner
2013-08-01  8:34       ` Jan Kara [this message]
2013-07-31  4:15 ` [PATCH 02/11] inode: add IOP_NOTHASHED to avoid inode hash lock in evict Dave Chinner
2013-07-31 14:44   ` Jan Kara
2013-08-01  8:12   ` Christoph Hellwig
2013-08-02  1:11     ` Dave Chinner
2013-08-02 14:32       ` Christoph Hellwig
2013-07-31  4:15 ` [PATCH 03/11] inode: convert inode_sb_list_lock to per-sb Dave Chinner
2013-07-31 14:48   ` Jan Kara
2013-07-31  4:15 ` [PATCH 04/11] sync: serialise per-superblock sync operations Dave Chinner
2013-07-31 15:12   ` Jan Kara
2013-07-31  4:15 ` [PATCH 05/11] inode: rename i_wb_list to i_io_list Dave Chinner
2013-07-31 14:51   ` Jan Kara
2013-07-31  4:15 ` [PATCH 06/11] bdi: add a new writeback list for sync Dave Chinner
2013-07-31 15:11   ` Jan Kara
2013-08-01  5:59     ` Dave Chinner
2013-07-31  4:15 ` [PATCH 07/11] writeback: periodically trim the writeback list Dave Chinner
2013-07-31 15:15   ` Jan Kara
2013-08-01  6:16     ` Dave Chinner
2013-08-01  9:03       ` Jan Kara
2013-07-31  4:15 ` [PATCH 08/11] inode: convert per-sb inode list to a list_lru Dave Chinner
2013-08-01  8:19   ` Christoph Hellwig
2013-08-02  1:06     ` Dave Chinner
2013-07-31  4:15 ` [PATCH 09/11] fs: Use RCU lookups for inode cache Dave Chinner
2013-07-31  4:15 ` [PATCH 10/11] list_lru: don't need node lock in list_lru_count_node Dave Chinner
2013-07-31  4:15 ` [PATCH 11/11] list_lru: don't lock during add/del if unnecessary Dave Chinner
2013-07-31  6:48 ` [PATCH 00/11] Sync and VFS scalability improvements Sedat Dilek
2013-08-01  6:19   ` Dave Chinner
2013-08-01  6:31     ` Sedat Dilek
