From: Chris Mason <chris.mason@oracle.com>
To: Vivek Goyal <vgoyal@redhat.com>
Cc: linux-fsdevel <linux-fsdevel@vger.kernel.org>,
	linux-ext4 <linux-ext4@vger.kernel.org>, xfs <xfs@oss.sgi.com>,
	jack <jack@suse.cz>, axboe <axboe@kernel.dk>
Subject: Re: buffered writeback torture program
Date: Thu, 21 Apr 2011 07:09:11 -0400	[thread overview]
Message-ID: <1303383609-sup-2330@think> (raw)
In-Reply-To: <20110420220626.GL29872@redhat.com>

Excerpts from Vivek Goyal's message of 2011-04-20 18:06:26 -0400:
> On Wed, Apr 20, 2011 at 02:23:29PM -0400, Chris Mason wrote:
> > Hi everyone,
> > 
> > I dug out my old fsync latency tester to make sure Jens' new plugging
> > code hadn't caused regressions.  This started off as a program Ted wrote
> > during the Firefox dark times, and I added some more code to saturate
> > spindles with random IO.
> > 
> > The basic idea is:
> > 
> > 1) make a nice big sequential 8GB file
> > 2) fork a process doing random buffered writes inside that file
> > 3) overwrite a second 4K file in a loop, doing fsyncs as you go.
> > 
> > The test program times how long each write and fsync take in step three.
> > The idea is that if we have problems with concurrent buffered writes and
> > fsyncs, then all of our fsyncs will get stuck behind the random IO
> > writeback and our latencies will be high.
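
The timing loop in step 3 is essentially the following (a minimal sketch,
not the exact fsync-tester source; the file name, iteration count, and
missing error handling are simplifications):

#define _XOPEN_SOURCE 500
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

static double now_sec(void)
{
	struct timeval tv;

	gettimeofday(&tv, NULL);
	return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(void)
{
	char buf[4096];
	int i, fd = open("fsync-file", O_WRONLY | O_CREAT, 0644);
	double t0, t1, t2;

	memset(buf, 0, sizeof(buf));
	for (i = 0; i < 100; i++) {
		t0 = now_sec();
		pwrite(fd, buf, sizeof(buf), 0);	/* overwrite the same 4K */
		t1 = now_sec();
		fsync(fd);
		t2 = now_sec();
		printf("write time: %.4fs fsync time: %.4fs\n",
		       t1 - t0, t2 - t1);
		/* the real tester also bails out once a write stalls badly */
	}
	close(fd);
	return 0;
}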
> > 
> > For a while, xfs, btrfs and ext4 did really well at this test.  Our
> > fsync latencies were very small and we all sent down synchronous IO that
> > the elevator dispatched straight to the drive.
> > 
> > Things have changed though, both xfs and ext4 have grown code to do
> > dramatically more IO than write_cache_pages has asked for (I'm pretty
> > sure I told everyone this was a good idea at the time).  When doing
> > sequential writes, this is a great idea.  When doing random IO, it leads
> > to unbounded stalls in balance_dirty_pages.
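
To make the random-IO half concrete, the writer forked in step 2 does
roughly this (again a sketch with illustrative names; the point is that
each pwrite dirties one 4K page at a random offset in the 8GB file, so
writeback finds no sequential runs to cluster):

#define _FILE_OFFSET_BITS 64
#define _XOPEN_SOURCE 500
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define FILE_SIZE (8ULL * 1024 * 1024 * 1024)	/* the 8GB file from step 1 */

int main(void)
{
	char buf[4096];
	int fd = open("random-write-file", O_WRONLY);

	memset(buf, 0xab, sizeof(buf));
	for (;;) {
		/* dirty one page at a random page-aligned offset; never fsync */
		off_t off = (off_t)(random() % (FILE_SIZE / 4096)) * 4096;
		pwrite(fd, buf, sizeof(buf), off);
	}
	return 0;
}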
> > 
> > Here's an example run on xfs:
> > 
> > # fsync-tester
> > setting up random write file
> > done setting up random write file
> > starting fsync run
> > starting random io!
> > write time 0.0009s fsync time: 2.0142s
> > write time 128.9305s fsync time: 2.6046s
> > run done 2 fsyncs total, killing random writer
> > 
> > In this case, the 128s spent in the write was for a single 4K overwrite of a
> > 4K file.
> 
> Chris, you seem to be doing 1MB (32768*32) writes on the fsync file instead of 4K.
> I changed the size to 4K; still not much difference though.

Whoops, I had that change made locally but didn't get it copied out.
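
The corrected overwrite step is just a single 4K pwrite per iteration
(sketch; fd and buf as in the timing-loop sketch earlier):

	/* one 4K overwrite per iteration, instead of 32 writes of 32768 bytes */
	pwrite(fd, buf, 4096, 0);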

> 
> Once the program has exited because of the high write time, I restarted it, and
> this time I don't see high write times.

I see this for some of my runs as well.

> 
> First run
> ---------
> # ./a.out 
> setting up random write file
> done setting up random write file
> starting fsync run
> starting random io!
> write time: 0.0006s fsync time: 0.3400s
> write time: 63.3270s fsync time: 0.3760s
> run done 2 fsyncs total, killing random writer
> 
> Second run
> ----------
> # ./a.out 
> starting fsync run
> starting random io!
> write time: 0.0006s fsync time: 0.5359s
> write time: 0.0007s fsync time: 0.3559s
> write time: 0.0009s fsync time: 0.3113s
> write time: 0.0008s fsync time: 0.4336s
> write time: 0.0009s fsync time: 0.3780s
> write time: 0.0008s fsync time: 0.3114s
> write time: 0.0009s fsync time: 0.3225s
> write time: 0.0009s fsync time: 0.3891s
> write time: 0.0009s fsync time: 0.4336s
> write time: 0.0009s fsync time: 0.4225s
> write time: 0.0009s fsync time: 0.4114s
> write time: 0.0007s fsync time: 0.4004s
> 
> Not sure why that would happen.
> 
> I am wondering why the pwrite/fsync process was throttled. It did not have any
> pages in the page cache and it shouldn't have hit the per-task dirty limits. Does
> that mean the per-task dirty limit logic does not work, or am I completely missing
> the root cause of the problem?

I haven't traced it to see.  This test box only has 1GB of RAM, so the
dirty ratios can be very tight.
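
For a rough sense of scale, assuming this box runs the default
vm.dirty_ratio=20 and vm.dirty_background_ratio=10: 1GB of RAM gives a hard
dirty limit of about 0.20 * 1024MB ~= 205MB and a background writeback
threshold of about 102MB (a bit less in practice, since the limits are
computed against dirtyable memory rather than total RAM).  The random
writer can dirty that much in a few seconds, so anyone who lands in
balance_dirty_pages behind it can stall for a long time.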

-chris
