From: Wu Fengguang <fengguang.wu@intel.com>
To: Dave Chinner <david@fromorbit.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Jan Kara <jack@suse.cz>, Christoph Hellwig <hch@infradead.org>,
"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 14/17] writeback: make writeback_control.nr_to_write straight
Date: Fri, 13 May 2011 13:28:06 +0800
Message-ID: <20110513052806.GE8016@localhost>
In-Reply-To: <20110512231759.GM19446@dastard>
On Fri, May 13, 2011 at 07:18:00AM +0800, Dave Chinner wrote:
> On Thu, May 12, 2011 at 09:57:20PM +0800, Wu Fengguang wrote:
> > Pass struct wb_writeback_work all the way down to writeback_sb_inodes(),
> > and initialize the struct writeback_control there.
> >
> > struct writeback_control is basically designed to control writeback of a
> > single file, but we keep abusing it for writing multiple files in
> > writeback_sb_inodes() and its callers.
> >
> > It immediately cleans things up, e.g. suddenly wbc.nr_to_write vs
> > work->nr_pages starts to make sense, and instead of saving and restoring
> > pages_skipped in writeback_sb_inodes() it can always start with a clean
> > zero value.
> >
> > It also makes a neat IO pattern change: large dirty files are now
> > written in the full 4MB writeback chunk size, rather than in whatever
> > quota remains in wbc->nr_to_write.
> >
> > Proposed-by: Christoph Hellwig <hch@infradead.org>
> > Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
> > ---
> .....
> > @@ -543,34 +588,44 @@ static int writeback_sb_inodes(struct su
> > requeue_io(inode, wb);
> > continue;
> > }
> > -
> > __iget(inode);
> > + write_chunk = writeback_chunk_size(work);
> > + wbc.nr_to_write = write_chunk;
> > + wbc.pages_skipped = 0;
> > +
> > + writeback_single_inode(inode, wb, &wbc);
> >
> > - pages_skipped = wbc->pages_skipped;
> > - writeback_single_inode(inode, wb, wbc);
> > - if (wbc->pages_skipped != pages_skipped) {
> > + work->nr_pages -= write_chunk - wbc.nr_to_write;
> > + wrote += write_chunk - wbc.nr_to_write;
> > + if (wbc.pages_skipped) {
> > /*
> > * writeback is not making progress due to locked
> > * buffers. Skip this inode for now.
> > */
> > redirty_tail(inode, wb);
> > - }
> > + } else if (!(inode->i_state & I_DIRTY))
> > + wrote++;
>
> Oh, that's just ugly. Do that accounting via nr_to_write in
> writeback_single_inode() as I suggested earlier, please.
This is the simpler and more reliable test for "whether the inode has
been cleaned", one that does not rely on the return value of
->write_inode(), as I replied in the earlier email.
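
To spell it out, the check boils down to this (same code as in the hunk
above, just highlighting the idea):

	/*
	 * After writeback_single_inode() has run, a cleared I_DIRTY means
	 * the inode really went clean, regardless of what ->write_inode()
	 * returned for it.
	 */
	if (!(inode->i_state & I_DIRTY))
		wrote++;
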
> > spin_unlock(&inode->i_lock);
> > spin_unlock(&wb->list_lock);
> > iput(inode);
> > cond_resched();
> > spin_lock(&wb->list_lock);
> > - if (wbc->nr_to_write <= 0)
> > - return 1;
> > + /*
> > + * bail out to wb_writeback() often enough to check
> > + * background threshold and other termination conditions.
> > + */
> > + if (wrote >= MAX_WRITEBACK_PAGES)
> > + break;
>
> Why do this so often? If you are writing large files, it will be
> once every writeback_single_inode() call that you bail. Seems rather
> inefficient to me to go back to the top level loop just to check for
> more work when we already know we have more work to do because
> there's still inodes on b_io....
(answering the comments below together)

For large files, it's exactly the same behavior as in the old
wb_writeback(), which set .nr_to_write = MAX_WRITEBACK_PAGES.
So it's not "more inefficient" than the original code.
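
Roughly, the old wb_writeback() loop went like this (simplified from
memory, so take the details with a grain of salt):

	for (;;) {
		...
		wbc.nr_to_write = MAX_WRITEBACK_PAGES;
		wbc.pages_skipped = 0;
		writeback_inodes_wb(wb, &wbc);
		work->nr_pages -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
		wrote += MAX_WRITEBACK_PAGES - wbc.nr_to_write;
		...
	}

so large files were already being chopped into MAX_WRITEBACK_PAGES
chunks at this level.
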
For balance_dirty_pages(), it may change behavior by splitting one
16MB write into four 4MB writes. However, the upside could be lower
throttling latency.
The fix is to do IO-less balance_dirty_pages() and to use a larger
write chunk size (around half the write bandwidth). Then we get a
reasonably good bail-out frequency as well as IO efficiency.
Thanks,
Fengguang
> > + if (work->nr_pages <= 0)
> > + break;
> > }
> > - /* b_io is empty */
> > - return 1;
> > + return wrote;
> > }
> >
> > -static void __writeback_inodes_wb(struct bdi_writeback *wb,
> > - struct writeback_control *wbc)
> > +static long __writeback_inodes_wb(struct bdi_writeback *wb,
> > + struct wb_writeback_work *work)
> > {
> > - int ret = 0;
> > + long wrote = 0;
> >
> > while (!list_empty(&wb->b_io)) {
> > struct inode *inode = wb_inode(wb->b_io.prev);
> > @@ -580,33 +635,34 @@ static void __writeback_inodes_wb(struct
> > requeue_io(inode, wb);
> > continue;
> > }
> > - ret = writeback_sb_inodes(sb, wb, wbc, false);
> > + wrote += writeback_sb_inodes(sb, wb, work);
> > drop_super(sb);
> >
> > - if (ret)
> > + if (wrote >= MAX_WRITEBACK_PAGES)
> > + break;
> > + if (work->nr_pages <= 0)
> > break;
>
> Same here.
>
> > }
> > /* Leave any unwritten inodes on b_io */
> > + return wrote;
> > }
> >
> > -void writeback_inodes_wb(struct bdi_writeback *wb,
> > - struct writeback_control *wbc)
> > +long writeback_inodes_wb(struct bdi_writeback *wb, long nr_pages)
> > {
> > + struct wb_writeback_work work = {
> > + .nr_pages = nr_pages,
> > + .sync_mode = WB_SYNC_NONE,
> > + .range_cyclic = 1,
> > + };
> > +
> > spin_lock(&wb->list_lock);
> > if (list_empty(&wb->b_io))
> > - queue_io(wb, wbc->older_than_this);
> > - __writeback_inodes_wb(wb, wbc);
> > + queue_io(wb, NULL);
> > + __writeback_inodes_wb(wb, &work);
> > spin_unlock(&wb->list_lock);
> > -}
> >
> > -/*
> > - * The maximum number of pages to writeout in a single bdi flush/kupdate
> > - * operation. We do this so we don't hold I_SYNC against an inode for
> > - * enormous amounts of time, which would block a userspace task which has
> > - * been forced to throttle against that inode. Also, the code reevaluates
> > - * the dirty each time it has written this many pages.
> > - */
> > -#define MAX_WRITEBACK_PAGES 1024
> > + return nr_pages - work.nr_pages;
> > +}
>
> And this change means we'll only ever write 1024 pages maximum per
> call to writeback_inodes_wb() when large files are present. that
> means:
>
> ....
> > @@ -562,17 +555,17 @@ static void balance_dirty_pages(struct a
> > * threshold otherwise wait until the disk writes catch
> > * up.
> > */
> > - trace_wbc_balance_dirty_start(&wbc, bdi);
> > + trace_balance_dirty_start(bdi);
> > if (bdi_nr_reclaimable > bdi_thresh) {
> > - writeback_inodes_wb(&bdi->wb, &wbc);
> > - pages_written += write_chunk - wbc.nr_to_write;
> > - trace_wbc_balance_dirty_written(&wbc, bdi);
> > + pages_written += writeback_inodes_wb(&bdi->wb,
> > + write_chunk);
> > + trace_balance_dirty_written(bdi, pages_written);
> > if (pages_written >= write_chunk)
> > break; /* We've done our duty */
> > }
> > - trace_wbc_balance_dirty_wait(&wbc, bdi);
> > __set_current_state(TASK_UNINTERRUPTIBLE);
> > io_schedule_timeout(pause);
> > + trace_balance_dirty_wait(bdi);
>
> We're going to get different throttling behaviour dependent on
> whether there are large dirty files present or not in cache....
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@fromorbit.com