From: Theodore Tso <tytso@mit.edu>
To: Christoph Hellwig <hch@infradead.org>
Cc: linux-mm@kvack.org,
	Ext4 Developers List <linux-ext4@vger.kernel.org>,
	linux-fsdevel@vger.kernel.org, chris.mason@oracle.com,
	jens.axboe@oracle.com
Subject: Re: [PATCH, RFC] vm: Add a tuning knob for vm.max_writeback_pages
Date: Sun, 30 Aug 2009 23:08:15 -0400
Message-ID: <20090831030815.GD20822@mit.edu>
In-Reply-To: <20090830222710.GA9938@infradead.org>

On Sun, Aug 30, 2009 at 06:27:10PM -0400, Christoph Hellwig wrote:
> I don't think tuning it on a per-filesystem basis is a good idea;
> we had to resort to this for 2.6.30 as a quick hack, and we will get
> rid of it again in 2.6.31 one way or another.  I personally think we
> should fight this cancer of per-filesystem hacks in the writeback code
> as much as we can.  Right now people keep adding tuning hacks for
> specific workloads there, and at least all the modern filesystems (ext4,
> btrfs and XFS) have very similar requirements of the writeback code,
> that is, give the filesystem as much as possible to write at a time
> so it can make intelligent decisions based on that.  The VM writeback
> code fails horribly at that right now.

Yep; and Jens' patch doesn't change that.  It is still sending writes
out to the filesystem a piddling 1024 pages at a time.
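
For scale, assuming the usual 4k page size, 1024 pages works out to a
mere 4MB handed to the filesystem per writepages call:

	$ echo $((1024 * 4096))		# 1024 pages * 4k page size
	4194304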

> My stance is to wait for this until about -rc2, at which point Jens'
> code is hopefully in and we can start doing all the fine-tuning,
> including lots of benchmarking.

Well, I've ported my patch so that it applies on top of Jens' per-bdi
patch series.  Jens, would you agree to add it to the series?  We can
choose a different default if you like, but making MAX_WRITEBACK_PAGES
tunable seems clearly necessary.
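
For concreteness, with the patch applied the knob should show up
alongside the other vm sysctls, so it can be adjusted at run time
roughly like this (the name follows the patch subject; the default is
whatever we settle on):

	$ sysctl vm.max_writeback_pages
	$ sysctl -w vm.max_writeback_pages=32768
	$ echo 32768 > /proc/sys/vm/max_writeback_pages	# equivalent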

By the way, while I was testing my patch on top of v13 of the per-bdi
patches, I found something *very* curious.  I ran the following
commands on a freshly mkfs'ed ext4 filesystem:

	dd if=/dev/zero of=test1 bs=1024k count=128
	dd if=/dev/zero of=test2 bs=1024k count=128
	sync

I traced the calls to ext4_da_writepages() using ftrace, and found this:

      flush-8:16-1829  [001]    23.416351: ext4_da_writepages: dev sdb ino 12 nr_t_write 32759 pages_skipped 0 range_start 0 range_end 0 nonblocking 0 for_kupdate 0 for_reclaim 0 for_writepages 1 range_cyclic 1
      flush-8:16-1829  [000]    25.939354: ext4_da_writepages: dev sdb ino 12 nr_t_write 32768 pages_skipped 0 range_start 0 range_end 0 nonblocking 0 for_kupdate 0 for_reclaim 0 for_writepages 1 range_cyclic 1
      flush-8:16-1829  [000]    25.939486: ext4_da_writepages: dev sdb ino 13 nr_t_write 32759 pages_skipped 0 range_start 134180864 range_end 9223372036854775807 nonblocking 0 for_kupdate 0 for_reclaim 0 for_writepages 1 range_cyclic 1
      flush-8:16-1829  [000]    27.055687: ext4_da_writepages: dev sdb ino 12 nr_t_write 32768 pages_skipped 0 range_start 0 range_end 0 nonblocking 0 for_kupdate 0 for_reclaim 0 for_writepages 1 range_cyclic 1
      flush-8:16-1829  [000]    27.055691: ext4_da_writepages: dev sdb ino 13 nr_t_write 32768 pages_skipped 0 range_start 0 range_end 0 nonblocking 0 for_kupdate 0 for_reclaim 0 for_writepages 1 range_cyclic 1
      flush-8:16-1829  [000]    27.878708: ext4_da_writepages: dev sdb ino 13 nr_t_write 32768 pages_skipped 0 range_start 0 range_end 0 nonblocking 0 for_kupdate 0 for_reclaim 0 for_writepages 1 range_cyclic 1
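
For anyone who wants to reproduce this: the ext4_da_writepages event
can be captured through the standard ftrace interface, roughly as
follows (assuming debugfs is mounted at /sys/kernel/debug and the ext4
tracepoints are compiled into the kernel):

	echo 1 > /sys/kernel/debug/tracing/events/ext4/ext4_da_writepages/enable
	# ... run the dd/sync workload above ...
	cat /sys/kernel/debug/tracing/trace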

The *first* time the per-bdi code called writepages on the second file
(test2, inode #13), range_start was 134180864, which, curiously enough,
is 4096*32759, i.e. the page size times the nr_to_write value passed to
ext4_da_writepages.  Given that the inode only had 32768 pages, the
fact that apparently *some* codepath called ext4_da_writepages starting
at logical block 32759, with nr_to_write set to 32759, seems very
curious indeed.  That doesn't look right at all.  It's late, so I won't
try to track it down tonight; besides, it's your code, so you can
probably figure it out faster than I could....
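
The arithmetic, for the record:

	$ echo $((32759 * 4096))	# nr_to_write * 4k page size
	134180864			# matches range_start of that first inode-13 call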

     	     	       	    	    	       	  - Ted

